00:00:00.000 Started by upstream project "autotest-per-patch" build number 122838 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.167 Using shallow fetch with depth 1 00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.167 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.558 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.569 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.581 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:06.581 > git config core.sparsecheckout # timeout=10 00:00:06.592 > git read-tree -mu HEAD # timeout=10 00:00:06.609 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:06.629 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:06.629 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:06.746 [Pipeline] Start of Pipeline 00:00:06.757 [Pipeline] library 00:00:06.759 Loading library shm_lib@master 00:00:06.759 Library shm_lib@master is cached. Copying from home. 00:00:06.777 [Pipeline] node 00:00:06.788 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.789 [Pipeline] { 00:00:06.804 [Pipeline] catchError 00:00:06.806 [Pipeline] { 00:00:06.820 [Pipeline] wrap 00:00:06.829 [Pipeline] { 00:00:06.839 [Pipeline] stage 00:00:06.841 [Pipeline] { (Prologue) 00:00:07.022 [Pipeline] sh 00:00:07.307 + logger -p user.info -t JENKINS-CI 00:00:07.325 [Pipeline] echo 00:00:07.326 Node: WFP8 00:00:07.335 [Pipeline] sh 00:00:07.631 [Pipeline] setCustomBuildProperty 00:00:07.642 [Pipeline] echo 00:00:07.644 Cleanup processes 00:00:07.648 [Pipeline] sh 00:00:07.931 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.931 749390 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.945 [Pipeline] sh 00:00:08.228 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.228 ++ grep -v 'sudo pgrep' 00:00:08.228 ++ awk '{print $1}' 00:00:08.228 + sudo kill -9 00:00:08.228 + true 00:00:08.244 [Pipeline] cleanWs 00:00:08.255 [WS-CLEANUP] Deleting project workspace... 00:00:08.255 [WS-CLEANUP] Deferred wipeout is used... 
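
[Note: the cleanup stage traced above looks for SPDK processes left over from a previous run and force-kills them. A minimal stand-alone sketch of the same shell idiom — workspace path copied from the trace, everything else illustrative:

  # find leftover processes under the workspace; drop pgrep's own sudo wrapper from the list
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # SIGKILL whatever remains; '|| true' keeps the stage green when $pids is empty
  sudo kill -9 $pids || true

The '+ true' in the trace above is exactly this fallback firing, since no stale processes matched.]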
00:00:08.261 [WS-CLEANUP] done 00:00:08.266 [Pipeline] setCustomBuildProperty 00:00:08.282 [Pipeline] sh 00:00:08.564 + sudo git config --global --replace-all safe.directory '*' 00:00:08.642 [Pipeline] nodesByLabel 00:00:08.643 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.653 [Pipeline] httpRequest 00:00:08.658 HttpMethod: GET 00:00:08.659 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.663 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.676 Response Code: HTTP/1.1 200 OK 00:00:08.676 Success: Status code 200 is in the accepted range: 200,404 00:00:08.677 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.579 [Pipeline] sh 00:00:10.865 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.885 [Pipeline] httpRequest 00:00:10.890 HttpMethod: GET 00:00:10.891 URL: http://10.211.164.101/packages/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:10.893 Sending request to url: http://10.211.164.101/packages/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:10.896 Response Code: HTTP/1.1 200 OK 00:00:10.897 Success: Status code 200 is in the accepted range: 200,404 00:00:10.898 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:25.793 [Pipeline] sh 00:00:26.076 + tar --no-same-owner -xf spdk_2b14ffc3496d421004d230421561168eba4bac58.tar.gz 00:00:28.616 [Pipeline] sh 00:00:28.897 + git -C spdk log --oneline -n5 00:00:28.897 2b14ffc34 nvmf: method for getting DH-HMAC-CHAP keys 00:00:28.897 091d58775 nvme: make spdk_nvme_dhchap_calculate() public 00:00:28.897 2c8f92576 nvmf/auth: send DH-HMAC-CHAP_challenge message 00:00:28.897 c06b0c79b nvmf: make allow_any_host its own byte 00:00:28.897 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:00:28.910 [Pipeline] } 00:00:28.926 [Pipeline] // stage 00:00:28.935 [Pipeline] stage 00:00:28.937 [Pipeline] { (Prepare) 00:00:28.956 [Pipeline] writeFile 00:00:28.973 [Pipeline] sh 00:00:29.254 + logger -p user.info -t JENKINS-CI 00:00:29.267 [Pipeline] sh 00:00:29.548 + logger -p user.info -t JENKINS-CI 00:00:29.561 [Pipeline] sh 00:00:29.842 + cat autorun-spdk.conf 00:00:29.842 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.842 SPDK_TEST_NVMF=1 00:00:29.842 SPDK_TEST_NVME_CLI=1 00:00:29.842 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.842 SPDK_TEST_NVMF_NICS=e810 00:00:29.842 SPDK_TEST_VFIOUSER=1 00:00:29.842 SPDK_RUN_UBSAN=1 00:00:29.842 NET_TYPE=phy 00:00:29.849 RUN_NIGHTLY=0 00:00:29.854 [Pipeline] readFile 00:00:29.877 [Pipeline] withEnv 00:00:29.879 [Pipeline] { 00:00:29.893 [Pipeline] sh 00:00:30.180 + set -ex 00:00:30.180 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:30.180 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:30.180 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.180 ++ SPDK_TEST_NVMF=1 00:00:30.180 ++ SPDK_TEST_NVME_CLI=1 00:00:30.180 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.180 ++ SPDK_TEST_NVMF_NICS=e810 00:00:30.180 ++ SPDK_TEST_VFIOUSER=1 00:00:30.180 ++ SPDK_RUN_UBSAN=1 00:00:30.180 ++ NET_TYPE=phy 00:00:30.180 ++ RUN_NIGHTLY=0 00:00:30.180 + case $SPDK_TEST_NVMF_NICS in 00:00:30.180 + DRIVERS=ice 00:00:30.180 + [[ tcp == \r\d\m\a ]] 00:00:30.180 + [[ -n ice ]] 00:00:30.180 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:30.180 rmmod: 
ERROR: Module mlx4_ib is not currently loaded 00:00:30.180 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:30.180 rmmod: ERROR: Module irdma is not currently loaded 00:00:30.180 rmmod: ERROR: Module i40iw is not currently loaded 00:00:30.180 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:30.180 + true 00:00:30.180 + for D in $DRIVERS 00:00:30.180 + sudo modprobe ice 00:00:30.180 + exit 0 00:00:30.219 [Pipeline] } 00:00:30.236 [Pipeline] // withEnv 00:00:30.241 [Pipeline] } 00:00:30.259 [Pipeline] // stage 00:00:30.269 [Pipeline] catchError 00:00:30.270 [Pipeline] { 00:00:30.286 [Pipeline] timeout 00:00:30.286 Timeout set to expire in 40 min 00:00:30.288 [Pipeline] { 00:00:30.304 [Pipeline] stage 00:00:30.306 [Pipeline] { (Tests) 00:00:30.320 [Pipeline] sh 00:00:30.602 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:30.602 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:30.602 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:30.602 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:30.602 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:30.602 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:30.602 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:30.602 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:30.603 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:30.603 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:30.603 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:30.603 + source /etc/os-release 00:00:30.603 ++ NAME='Fedora Linux' 00:00:30.603 ++ VERSION='38 (Cloud Edition)' 00:00:30.603 ++ ID=fedora 00:00:30.603 ++ VERSION_ID=38 00:00:30.603 ++ VERSION_CODENAME= 00:00:30.603 ++ PLATFORM_ID=platform:f38 00:00:30.603 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:30.603 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:30.603 ++ LOGO=fedora-logo-icon 00:00:30.603 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:30.603 ++ HOME_URL=https://fedoraproject.org/ 00:00:30.603 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:30.603 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:30.603 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:30.603 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:30.603 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:30.603 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:30.603 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:30.603 ++ SUPPORT_END=2024-05-14 00:00:30.603 ++ VARIANT='Cloud Edition' 00:00:30.603 ++ VARIANT_ID=cloud 00:00:30.603 + uname -a 00:00:30.603 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:30.603 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:33.135 Hugepages 00:00:33.135 node hugesize free / total 00:00:33.135 node0 1048576kB 0 / 0 00:00:33.135 node0 2048kB 0 / 0 00:00:33.135 node1 1048576kB 0 / 0 00:00:33.135 node1 2048kB 0 / 0 00:00:33.135 00:00:33.135 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:33.135 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 
0000:00:04.6 8086 2021 0 ioatdma - - 00:00:33.135 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:33.135 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:33.135 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:33.135 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:33.135 + rm -f /tmp/spdk-ld-path 00:00:33.135 + source autorun-spdk.conf 00:00:33.135 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.135 ++ SPDK_TEST_NVMF=1 00:00:33.135 ++ SPDK_TEST_NVME_CLI=1 00:00:33.135 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.135 ++ SPDK_TEST_NVMF_NICS=e810 00:00:33.135 ++ SPDK_TEST_VFIOUSER=1 00:00:33.135 ++ SPDK_RUN_UBSAN=1 00:00:33.135 ++ NET_TYPE=phy 00:00:33.135 ++ RUN_NIGHTLY=0 00:00:33.135 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:33.135 + [[ -n '' ]] 00:00:33.135 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.135 + for M in /var/spdk/build-*-manifest.txt 00:00:33.135 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:33.135 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.135 + for M in /var/spdk/build-*-manifest.txt 00:00:33.135 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:33.135 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:33.135 ++ uname 00:00:33.135 + [[ Linux == \L\i\n\u\x ]] 00:00:33.135 + sudo dmesg -T 00:00:33.135 + sudo dmesg --clear 00:00:33.135 + dmesg_pid=750303 00:00:33.135 + sudo dmesg -Tw 00:00:33.135 + [[ Fedora Linux == FreeBSD ]] 00:00:33.135 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.135 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:33.135 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:33.135 + [[ -x /usr/src/fio-static/fio ]] 00:00:33.135 + export FIO_BIN=/usr/src/fio-static/fio 00:00:33.135 + FIO_BIN=/usr/src/fio-static/fio 00:00:33.135 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:33.135 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:33.135 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:33.135 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.135 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:33.135 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:33.135 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.135 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:33.135 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:33.135 Test configuration: 00:00:33.135 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.135 SPDK_TEST_NVMF=1 00:00:33.135 SPDK_TEST_NVME_CLI=1 00:00:33.135 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:33.135 SPDK_TEST_NVMF_NICS=e810 00:00:33.135 SPDK_TEST_VFIOUSER=1 00:00:33.135 SPDK_RUN_UBSAN=1 00:00:33.135 NET_TYPE=phy 00:00:33.394 RUN_NIGHTLY=0 02:55:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:33.394 02:55:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:33.394 02:55:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:33.394 02:55:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:33.394 02:55:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.394 02:55:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.394 02:55:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.394 02:55:04 -- paths/export.sh@5 -- $ export PATH 00:00:33.394 02:55:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:33.394 02:55:04 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:33.394 02:55:04 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:33.394 02:55:04 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715734504.XXXXXX 00:00:33.394 02:55:04 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715734504.mMyna3 00:00:33.394 02:55:04 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:33.394 02:55:04 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:33.394 02:55:04 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:33.394 02:55:04 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:33.394 02:55:04 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:33.394 02:55:04 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:33.394 02:55:04 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:33.394 02:55:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.394 02:55:04 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:33.394 02:55:04 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:33.394 02:55:04 -- pm/common@17 -- $ local monitor 00:00:33.394 02:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.394 02:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.394 02:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.394 02:55:04 -- pm/common@21 -- $ date +%s 00:00:33.394 02:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:33.394 02:55:04 -- pm/common@21 -- $ date +%s 00:00:33.394 02:55:04 -- pm/common@25 -- $ sleep 1 00:00:33.394 02:55:04 -- pm/common@21 -- $ date +%s 00:00:33.394 02:55:04 -- pm/common@21 -- $ date +%s 00:00:33.394 02:55:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734504 00:00:33.394 02:55:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734504 00:00:33.394 02:55:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734504 00:00:33.394 02:55:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715734504 00:00:33.394 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734504_collect-vmstat.pm.log 00:00:33.394 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734504_collect-cpu-load.pm.log 00:00:33.394 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734504_collect-cpu-temp.pm.log 00:00:33.394 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715734504_collect-bmc-pm.bmc.pm.log 00:00:34.329 02:55:05 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:34.329 02:55:05 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:34.329 02:55:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:34.329 02:55:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:34.329 02:55:05 -- spdk/autobuild.sh@16 -- $ date -u 00:00:34.329 Wed May 15 12:55:05 AM UTC 2024 00:00:34.329 02:55:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:34.329 v24.05-pre-627-g2b14ffc34 00:00:34.329 02:55:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:34.329 02:55:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:34.329 02:55:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:34.329 02:55:05 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:34.329 02:55:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:34.329 02:55:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:34.329 ************************************ 00:00:34.329 START TEST ubsan 00:00:34.329 ************************************ 00:00:34.329 02:55:05 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:34.329 using ubsan 00:00:34.329 00:00:34.329 real 0m0.000s 00:00:34.329 user 0m0.000s 00:00:34.329 sys 0m0.000s 00:00:34.329 02:55:05 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:34.329 02:55:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:34.329 ************************************ 00:00:34.329 END TEST ubsan 00:00:34.329 ************************************ 00:00:34.329 02:55:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:34.329 02:55:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:34.329 02:55:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:34.329 02:55:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:34.588 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:34.588 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:34.847 Using 'verbs' RDMA provider 00:00:47.625 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:59.837 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:59.837 Creating mk/config.mk...done. 00:00:59.837 Creating mk/cc.flags.mk...done. 00:00:59.837 Type 'make' to build. 00:00:59.837 02:55:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:00:59.837 02:55:29 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:59.837 02:55:29 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:59.837 02:55:29 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.837 ************************************ 00:00:59.837 START TEST make 00:00:59.837 ************************************ 00:00:59.837 02:55:29 make -- common/autotest_common.sh@1121 -- $ make -j96 00:00:59.837 make[1]: Nothing to be done for 'all'. 
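
[Note: the autobuild stage above configures SPDK with the config_params flag set captured earlier in the log and then starts the parallel build. A hand-run equivalent — flags copied verbatim from the configure invocation; substitute your own fio source path and job count:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
              --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
              --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"   # the CI job pins this to 'make -j96'
]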
00:01:00.096 The Meson build system 00:01:00.096 Version: 1.3.1 00:01:00.096 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:00.096 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:00.096 Build type: native build 00:01:00.096 Project name: libvfio-user 00:01:00.096 Project version: 0.0.1 00:01:00.096 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:00.096 C linker for the host machine: cc ld.bfd 2.39-16 00:01:00.096 Host machine cpu family: x86_64 00:01:00.096 Host machine cpu: x86_64 00:01:00.096 Run-time dependency threads found: YES 00:01:00.096 Library dl found: YES 00:01:00.096 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:00.096 Run-time dependency json-c found: YES 0.17 00:01:00.096 Run-time dependency cmocka found: YES 1.1.7 00:01:00.096 Program pytest-3 found: NO 00:01:00.096 Program flake8 found: NO 00:01:00.096 Program misspell-fixer found: NO 00:01:00.096 Program restructuredtext-lint found: NO 00:01:00.096 Program valgrind found: YES (/usr/bin/valgrind) 00:01:00.096 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:00.096 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:00.096 Compiler for C supports arguments -Wwrite-strings: YES 00:01:00.096 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:00.096 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:00.096 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:00.096 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
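
[Note: because the build was configured with --with-vfio-user, the bundled libvfio-user is compiled first with Meson, as traced above. A stand-alone sketch of the same sequence — directories copied from the surrounding log; the exact 'meson setup' invocation is not shown in the log, so the command below is an assumption reconstructed from the reported options:

  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  bld=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$bld" "$src" --buildtype debug   # default_library=shared per the option summary below
  ninja -C "$bld"
  # install step matching the DESTDIR line a little further down the log
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$bld"
]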
00:01:00.096 Build targets in project: 8 00:01:00.096 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:00.096 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:00.096 00:01:00.096 libvfio-user 0.0.1 00:01:00.096 00:01:00.096 User defined options 00:01:00.096 buildtype : debug 00:01:00.096 default_library: shared 00:01:00.096 libdir : /usr/local/lib 00:01:00.096 00:01:00.096 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:00.662 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:00.920 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:00.920 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:00.920 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:00.920 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:00.920 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:00.920 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:00.920 [7/37] Compiling C object samples/null.p/null.c.o 00:01:00.920 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:00.920 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:00.920 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:00.920 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:00.920 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:00.920 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:00.920 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:00.920 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:00.920 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:00.920 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:00.920 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:00.920 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:00.920 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:00.920 [21/37] Compiling C object samples/server.p/server.c.o 00:01:00.920 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:00.920 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:00.920 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:00.920 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:00.920 [26/37] Compiling C object samples/client.p/client.c.o 00:01:00.920 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:00.920 [28/37] Linking target samples/client 00:01:00.920 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:00.920 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:00.920 [31/37] Linking target test/unit_tests 00:01:01.178 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:01.178 [33/37] Linking target samples/server 00:01:01.178 [34/37] Linking target samples/lspci 00:01:01.178 [35/37] Linking target samples/gpio-pci-idio-16 00:01:01.178 [36/37] Linking target samples/null 00:01:01.178 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:01.178 INFO: autodetecting backend as ninja 00:01:01.178 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:01.178 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:01.437 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:01.437 ninja: no work to do. 00:01:06.704 The Meson build system 00:01:06.704 Version: 1.3.1 00:01:06.704 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:06.704 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:06.704 Build type: native build 00:01:06.704 Program cat found: YES (/usr/bin/cat) 00:01:06.704 Project name: DPDK 00:01:06.704 Project version: 23.11.0 00:01:06.704 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.704 C linker for the host machine: cc ld.bfd 2.39-16 00:01:06.704 Host machine cpu family: x86_64 00:01:06.704 Host machine cpu: x86_64 00:01:06.704 Message: ## Building in Developer Mode ## 00:01:06.704 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:06.704 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:06.704 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:06.704 Program python3 found: YES (/usr/bin/python3) 00:01:06.704 Program cat found: YES (/usr/bin/cat) 00:01:06.704 Compiler for C supports arguments -march=native: YES 00:01:06.704 Checking for size of "void *" : 8 00:01:06.704 Checking for size of "void *" : 8 (cached) 00:01:06.704 Library m found: YES 00:01:06.704 Library numa found: YES 00:01:06.704 Has header "numaif.h" : YES 00:01:06.704 Library fdt found: NO 00:01:06.704 Library execinfo found: NO 00:01:06.704 Has header "execinfo.h" : YES 00:01:06.704 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.704 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:06.704 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:06.704 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:06.704 Run-time dependency openssl found: YES 3.0.9 00:01:06.704 Run-time dependency libpcap found: YES 1.10.4 00:01:06.704 Has header "pcap.h" with dependency libpcap: YES 00:01:06.704 Compiler for C supports arguments -Wcast-qual: YES 00:01:06.704 Compiler for C supports arguments -Wdeprecated: YES 00:01:06.704 Compiler for C supports arguments -Wformat: YES 00:01:06.704 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:06.704 Compiler for C supports arguments -Wformat-security: NO 00:01:06.704 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.704 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:06.704 Compiler for C supports arguments -Wnested-externs: YES 00:01:06.704 Compiler for C supports arguments -Wold-style-definition: YES 00:01:06.704 Compiler for C supports arguments -Wpointer-arith: YES 00:01:06.704 Compiler for C supports arguments -Wsign-compare: YES 00:01:06.704 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:06.704 Compiler for C supports arguments -Wundef: YES 00:01:06.704 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.704 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:06.704 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:06.704 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:06.704 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:06.704 Program objdump found: YES (/usr/bin/objdump) 00:01:06.704 Compiler for C supports arguments -mavx512f: YES 00:01:06.704 Checking if "AVX512 checking" compiles: YES 00:01:06.704 Fetching value of define "__SSE4_2__" : 1 00:01:06.704 Fetching value of define "__AES__" : 1 00:01:06.704 Fetching value of define "__AVX__" : 1 00:01:06.704 Fetching value of define "__AVX2__" : 1 00:01:06.704 Fetching value of define "__AVX512BW__" : 1 00:01:06.704 Fetching value of define "__AVX512CD__" : 1 00:01:06.704 Fetching value of define "__AVX512DQ__" : 1 00:01:06.704 Fetching value of define "__AVX512F__" : 1 00:01:06.704 Fetching value of define "__AVX512VL__" : 1 00:01:06.704 Fetching value of define "__PCLMUL__" : 1 00:01:06.704 Fetching value of define "__RDRND__" : 1 00:01:06.704 Fetching value of define "__RDSEED__" : 1 00:01:06.704 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:06.704 Fetching value of define "__znver1__" : (undefined) 00:01:06.704 Fetching value of define "__znver2__" : (undefined) 00:01:06.704 Fetching value of define "__znver3__" : (undefined) 00:01:06.704 Fetching value of define "__znver4__" : (undefined) 00:01:06.704 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:06.704 Message: lib/log: Defining dependency "log" 00:01:06.704 Message: lib/kvargs: Defining dependency "kvargs" 00:01:06.704 Message: lib/telemetry: Defining dependency "telemetry" 00:01:06.704 Checking for function "getentropy" : NO 00:01:06.704 Message: lib/eal: Defining dependency "eal" 00:01:06.704 Message: lib/ring: Defining dependency "ring" 00:01:06.704 Message: lib/rcu: Defining dependency "rcu" 00:01:06.704 Message: lib/mempool: Defining dependency "mempool" 00:01:06.704 Message: lib/mbuf: Defining dependency "mbuf" 00:01:06.704 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:06.704 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.704 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.704 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:06.704 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:06.704 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:06.704 Compiler for C supports arguments -mpclmul: YES 00:01:06.704 Compiler for C supports arguments -maes: YES 00:01:06.704 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:06.704 Compiler for C supports arguments -mavx512bw: YES 00:01:06.704 Compiler for C supports arguments -mavx512dq: YES 00:01:06.704 Compiler for C supports arguments -mavx512vl: YES 00:01:06.704 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:06.704 Compiler for C supports arguments -mavx2: YES 00:01:06.704 Compiler for C supports arguments -mavx: YES 00:01:06.704 Message: lib/net: Defining dependency "net" 00:01:06.704 Message: lib/meter: Defining dependency "meter" 00:01:06.705 Message: lib/ethdev: Defining dependency "ethdev" 00:01:06.705 Message: lib/pci: Defining dependency "pci" 00:01:06.705 Message: lib/cmdline: Defining dependency "cmdline" 00:01:06.705 Message: lib/hash: Defining dependency "hash" 00:01:06.705 Message: lib/timer: Defining dependency "timer" 00:01:06.705 Message: lib/compressdev: Defining dependency "compressdev" 00:01:06.705 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:06.705 Message: lib/dmadev: Defining dependency "dmadev" 00:01:06.705 Compiler for C supports arguments -Wno-cast-qual: YES 
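
[Note: the "Compiler for C supports arguments ..." and "Fetching value of define ..." probes above are Meson feature checks run while configuring the DPDK submodule; the resulting option set is printed in the "User defined options" summary further below. A rough hand-run equivalent of that configuration — option values copied from that summary; the long disable_apps/disable_libs lists are omitted here, and the setup command itself is an assumption, not shown in the log:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup "$spdk/dpdk/build-tmp" "$spdk/dpdk" \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix="$spdk/dpdk/build" \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dtests=false -Denable_docs=false -Denable_kmods=false
  ninja -C "$spdk/dpdk/build-tmp"
]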
00:01:06.705 Message: lib/power: Defining dependency "power" 00:01:06.705 Message: lib/reorder: Defining dependency "reorder" 00:01:06.705 Message: lib/security: Defining dependency "security" 00:01:06.705 Has header "linux/userfaultfd.h" : YES 00:01:06.705 Has header "linux/vduse.h" : YES 00:01:06.705 Message: lib/vhost: Defining dependency "vhost" 00:01:06.705 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:06.705 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:06.705 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:06.705 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:06.705 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:06.705 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:06.705 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:06.705 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:06.705 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:06.705 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:06.705 Program doxygen found: YES (/usr/bin/doxygen) 00:01:06.705 Configuring doxy-api-html.conf using configuration 00:01:06.705 Configuring doxy-api-man.conf using configuration 00:01:06.705 Program mandb found: YES (/usr/bin/mandb) 00:01:06.705 Program sphinx-build found: NO 00:01:06.705 Configuring rte_build_config.h using configuration 00:01:06.705 Message: 00:01:06.705 ================= 00:01:06.705 Applications Enabled 00:01:06.705 ================= 00:01:06.705 00:01:06.705 apps: 00:01:06.705 00:01:06.705 00:01:06.705 Message: 00:01:06.705 ================= 00:01:06.705 Libraries Enabled 00:01:06.705 ================= 00:01:06.705 00:01:06.705 libs: 00:01:06.705 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:06.705 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:06.705 cryptodev, dmadev, power, reorder, security, vhost, 00:01:06.705 00:01:06.705 Message: 00:01:06.705 =============== 00:01:06.705 Drivers Enabled 00:01:06.705 =============== 00:01:06.705 00:01:06.705 common: 00:01:06.705 00:01:06.705 bus: 00:01:06.705 pci, vdev, 00:01:06.705 mempool: 00:01:06.705 ring, 00:01:06.705 dma: 00:01:06.705 00:01:06.705 net: 00:01:06.705 00:01:06.705 crypto: 00:01:06.705 00:01:06.705 compress: 00:01:06.705 00:01:06.705 vdpa: 00:01:06.705 00:01:06.705 00:01:06.705 Message: 00:01:06.705 ================= 00:01:06.705 Content Skipped 00:01:06.705 ================= 00:01:06.705 00:01:06.705 apps: 00:01:06.705 dumpcap: explicitly disabled via build config 00:01:06.705 graph: explicitly disabled via build config 00:01:06.705 pdump: explicitly disabled via build config 00:01:06.705 proc-info: explicitly disabled via build config 00:01:06.705 test-acl: explicitly disabled via build config 00:01:06.705 test-bbdev: explicitly disabled via build config 00:01:06.705 test-cmdline: explicitly disabled via build config 00:01:06.705 test-compress-perf: explicitly disabled via build config 00:01:06.705 test-crypto-perf: explicitly disabled via build config 00:01:06.705 test-dma-perf: explicitly disabled via build config 00:01:06.705 test-eventdev: explicitly disabled via build config 00:01:06.705 test-fib: explicitly disabled via build config 00:01:06.705 test-flow-perf: explicitly disabled via build config 00:01:06.705 test-gpudev: explicitly disabled via build config 00:01:06.705 test-mldev: explicitly disabled via build 
config 00:01:06.705 test-pipeline: explicitly disabled via build config 00:01:06.705 test-pmd: explicitly disabled via build config 00:01:06.705 test-regex: explicitly disabled via build config 00:01:06.705 test-sad: explicitly disabled via build config 00:01:06.705 test-security-perf: explicitly disabled via build config 00:01:06.705 00:01:06.705 libs: 00:01:06.705 metrics: explicitly disabled via build config 00:01:06.705 acl: explicitly disabled via build config 00:01:06.705 bbdev: explicitly disabled via build config 00:01:06.705 bitratestats: explicitly disabled via build config 00:01:06.705 bpf: explicitly disabled via build config 00:01:06.705 cfgfile: explicitly disabled via build config 00:01:06.705 distributor: explicitly disabled via build config 00:01:06.705 efd: explicitly disabled via build config 00:01:06.705 eventdev: explicitly disabled via build config 00:01:06.705 dispatcher: explicitly disabled via build config 00:01:06.705 gpudev: explicitly disabled via build config 00:01:06.705 gro: explicitly disabled via build config 00:01:06.705 gso: explicitly disabled via build config 00:01:06.705 ip_frag: explicitly disabled via build config 00:01:06.705 jobstats: explicitly disabled via build config 00:01:06.705 latencystats: explicitly disabled via build config 00:01:06.705 lpm: explicitly disabled via build config 00:01:06.705 member: explicitly disabled via build config 00:01:06.705 pcapng: explicitly disabled via build config 00:01:06.705 rawdev: explicitly disabled via build config 00:01:06.705 regexdev: explicitly disabled via build config 00:01:06.705 mldev: explicitly disabled via build config 00:01:06.705 rib: explicitly disabled via build config 00:01:06.705 sched: explicitly disabled via build config 00:01:06.705 stack: explicitly disabled via build config 00:01:06.705 ipsec: explicitly disabled via build config 00:01:06.705 pdcp: explicitly disabled via build config 00:01:06.705 fib: explicitly disabled via build config 00:01:06.705 port: explicitly disabled via build config 00:01:06.705 pdump: explicitly disabled via build config 00:01:06.705 table: explicitly disabled via build config 00:01:06.705 pipeline: explicitly disabled via build config 00:01:06.705 graph: explicitly disabled via build config 00:01:06.705 node: explicitly disabled via build config 00:01:06.705 00:01:06.705 drivers: 00:01:06.705 common/cpt: not in enabled drivers build config 00:01:06.705 common/dpaax: not in enabled drivers build config 00:01:06.705 common/iavf: not in enabled drivers build config 00:01:06.705 common/idpf: not in enabled drivers build config 00:01:06.705 common/mvep: not in enabled drivers build config 00:01:06.705 common/octeontx: not in enabled drivers build config 00:01:06.705 bus/auxiliary: not in enabled drivers build config 00:01:06.705 bus/cdx: not in enabled drivers build config 00:01:06.705 bus/dpaa: not in enabled drivers build config 00:01:06.705 bus/fslmc: not in enabled drivers build config 00:01:06.705 bus/ifpga: not in enabled drivers build config 00:01:06.705 bus/platform: not in enabled drivers build config 00:01:06.705 bus/vmbus: not in enabled drivers build config 00:01:06.705 common/cnxk: not in enabled drivers build config 00:01:06.705 common/mlx5: not in enabled drivers build config 00:01:06.705 common/nfp: not in enabled drivers build config 00:01:06.705 common/qat: not in enabled drivers build config 00:01:06.705 common/sfc_efx: not in enabled drivers build config 00:01:06.705 mempool/bucket: not in enabled drivers build config 00:01:06.705 
mempool/cnxk: not in enabled drivers build config 00:01:06.705 mempool/dpaa: not in enabled drivers build config 00:01:06.705 mempool/dpaa2: not in enabled drivers build config 00:01:06.705 mempool/octeontx: not in enabled drivers build config 00:01:06.705 mempool/stack: not in enabled drivers build config 00:01:06.705 dma/cnxk: not in enabled drivers build config 00:01:06.705 dma/dpaa: not in enabled drivers build config 00:01:06.705 dma/dpaa2: not in enabled drivers build config 00:01:06.705 dma/hisilicon: not in enabled drivers build config 00:01:06.705 dma/idxd: not in enabled drivers build config 00:01:06.705 dma/ioat: not in enabled drivers build config 00:01:06.705 dma/skeleton: not in enabled drivers build config 00:01:06.705 net/af_packet: not in enabled drivers build config 00:01:06.705 net/af_xdp: not in enabled drivers build config 00:01:06.705 net/ark: not in enabled drivers build config 00:01:06.705 net/atlantic: not in enabled drivers build config 00:01:06.705 net/avp: not in enabled drivers build config 00:01:06.705 net/axgbe: not in enabled drivers build config 00:01:06.705 net/bnx2x: not in enabled drivers build config 00:01:06.705 net/bnxt: not in enabled drivers build config 00:01:06.705 net/bonding: not in enabled drivers build config 00:01:06.705 net/cnxk: not in enabled drivers build config 00:01:06.705 net/cpfl: not in enabled drivers build config 00:01:06.705 net/cxgbe: not in enabled drivers build config 00:01:06.705 net/dpaa: not in enabled drivers build config 00:01:06.705 net/dpaa2: not in enabled drivers build config 00:01:06.705 net/e1000: not in enabled drivers build config 00:01:06.705 net/ena: not in enabled drivers build config 00:01:06.705 net/enetc: not in enabled drivers build config 00:01:06.705 net/enetfec: not in enabled drivers build config 00:01:06.705 net/enic: not in enabled drivers build config 00:01:06.705 net/failsafe: not in enabled drivers build config 00:01:06.705 net/fm10k: not in enabled drivers build config 00:01:06.705 net/gve: not in enabled drivers build config 00:01:06.705 net/hinic: not in enabled drivers build config 00:01:06.705 net/hns3: not in enabled drivers build config 00:01:06.705 net/i40e: not in enabled drivers build config 00:01:06.705 net/iavf: not in enabled drivers build config 00:01:06.705 net/ice: not in enabled drivers build config 00:01:06.705 net/idpf: not in enabled drivers build config 00:01:06.705 net/igc: not in enabled drivers build config 00:01:06.705 net/ionic: not in enabled drivers build config 00:01:06.705 net/ipn3ke: not in enabled drivers build config 00:01:06.705 net/ixgbe: not in enabled drivers build config 00:01:06.705 net/mana: not in enabled drivers build config 00:01:06.705 net/memif: not in enabled drivers build config 00:01:06.705 net/mlx4: not in enabled drivers build config 00:01:06.705 net/mlx5: not in enabled drivers build config 00:01:06.705 net/mvneta: not in enabled drivers build config 00:01:06.705 net/mvpp2: not in enabled drivers build config 00:01:06.705 net/netvsc: not in enabled drivers build config 00:01:06.705 net/nfb: not in enabled drivers build config 00:01:06.705 net/nfp: not in enabled drivers build config 00:01:06.705 net/ngbe: not in enabled drivers build config 00:01:06.705 net/null: not in enabled drivers build config 00:01:06.705 net/octeontx: not in enabled drivers build config 00:01:06.705 net/octeon_ep: not in enabled drivers build config 00:01:06.706 net/pcap: not in enabled drivers build config 00:01:06.706 net/pfe: not in enabled drivers build config 
00:01:06.706 net/qede: not in enabled drivers build config 00:01:06.706 net/ring: not in enabled drivers build config 00:01:06.706 net/sfc: not in enabled drivers build config 00:01:06.706 net/softnic: not in enabled drivers build config 00:01:06.706 net/tap: not in enabled drivers build config 00:01:06.706 net/thunderx: not in enabled drivers build config 00:01:06.706 net/txgbe: not in enabled drivers build config 00:01:06.706 net/vdev_netvsc: not in enabled drivers build config 00:01:06.706 net/vhost: not in enabled drivers build config 00:01:06.706 net/virtio: not in enabled drivers build config 00:01:06.706 net/vmxnet3: not in enabled drivers build config 00:01:06.706 raw/*: missing internal dependency, "rawdev" 00:01:06.706 crypto/armv8: not in enabled drivers build config 00:01:06.706 crypto/bcmfs: not in enabled drivers build config 00:01:06.706 crypto/caam_jr: not in enabled drivers build config 00:01:06.706 crypto/ccp: not in enabled drivers build config 00:01:06.706 crypto/cnxk: not in enabled drivers build config 00:01:06.706 crypto/dpaa_sec: not in enabled drivers build config 00:01:06.706 crypto/dpaa2_sec: not in enabled drivers build config 00:01:06.706 crypto/ipsec_mb: not in enabled drivers build config 00:01:06.706 crypto/mlx5: not in enabled drivers build config 00:01:06.706 crypto/mvsam: not in enabled drivers build config 00:01:06.706 crypto/nitrox: not in enabled drivers build config 00:01:06.706 crypto/null: not in enabled drivers build config 00:01:06.706 crypto/octeontx: not in enabled drivers build config 00:01:06.706 crypto/openssl: not in enabled drivers build config 00:01:06.706 crypto/scheduler: not in enabled drivers build config 00:01:06.706 crypto/uadk: not in enabled drivers build config 00:01:06.706 crypto/virtio: not in enabled drivers build config 00:01:06.706 compress/isal: not in enabled drivers build config 00:01:06.706 compress/mlx5: not in enabled drivers build config 00:01:06.706 compress/octeontx: not in enabled drivers build config 00:01:06.706 compress/zlib: not in enabled drivers build config 00:01:06.706 regex/*: missing internal dependency, "regexdev" 00:01:06.706 ml/*: missing internal dependency, "mldev" 00:01:06.706 vdpa/ifc: not in enabled drivers build config 00:01:06.706 vdpa/mlx5: not in enabled drivers build config 00:01:06.706 vdpa/nfp: not in enabled drivers build config 00:01:06.706 vdpa/sfc: not in enabled drivers build config 00:01:06.706 event/*: missing internal dependency, "eventdev" 00:01:06.706 baseband/*: missing internal dependency, "bbdev" 00:01:06.706 gpu/*: missing internal dependency, "gpudev" 00:01:06.706 00:01:06.706 00:01:06.706 Build targets in project: 85 00:01:06.706 00:01:06.706 DPDK 23.11.0 00:01:06.706 00:01:06.706 User defined options 00:01:06.706 buildtype : debug 00:01:06.706 default_library : shared 00:01:06.706 libdir : lib 00:01:06.706 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.706 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:06.706 c_link_args : 00:01:06.706 cpu_instruction_set: native 00:01:06.706 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:06.706 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:06.706 enable_docs : false 00:01:06.706 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:06.706 enable_kmods : false 00:01:06.706 tests : false 00:01:06.706 00:01:06.706 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.972 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:06.972 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:07.232 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:07.232 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:07.232 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:07.232 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:07.232 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:07.232 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.232 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:07.232 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:07.232 [10/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:07.232 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:07.232 [12/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:07.232 [13/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:07.232 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:07.232 [15/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:07.232 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:07.232 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.232 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:07.232 [19/265] Linking static target lib/librte_kvargs.a 00:01:07.232 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:07.232 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:07.232 [22/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:07.232 [23/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:07.232 [24/265] Linking static target lib/librte_log.a 00:01:07.232 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:07.232 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:07.232 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:07.232 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:07.494 [29/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:07.494 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:07.494 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:07.494 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:07.494 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:07.494 [34/265] Linking static target lib/librte_pci.a 00:01:07.494 [35/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:07.494 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:07.494 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:07.494 [38/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:07.494 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:07.494 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:07.494 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:07.494 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:07.494 [43/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:07.494 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:07.757 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:07.757 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:07.757 [47/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:07.757 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:07.757 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:07.757 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:07.757 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:07.757 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:07.757 [53/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:07.757 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:07.757 [55/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:07.757 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:07.757 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:07.757 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:07.757 [59/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:07.757 [60/265] Linking static target lib/librte_ring.a 00:01:07.757 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:07.757 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:07.757 [63/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.757 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:07.757 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:07.757 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:07.757 [67/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:07.757 [68/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:07.757 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:07.757 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:07.757 [71/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:07.757 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:07.757 [73/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:07.757 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:07.757 
[75/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:07.757 [76/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:07.757 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:07.757 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:07.757 [79/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:07.757 [80/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:07.757 [81/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:07.757 [82/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:07.757 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:07.757 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:07.757 [85/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.757 [86/265] Linking static target lib/librte_meter.a 00:01:07.757 [87/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:07.757 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:07.757 [89/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:07.757 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:07.757 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:07.757 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:07.757 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:07.757 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:07.757 [95/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:07.757 [96/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.757 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:07.757 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:07.757 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:07.757 [100/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.757 [101/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.757 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:07.757 [103/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:07.757 [104/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.757 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.757 [106/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:07.757 [107/265] Linking static target lib/librte_telemetry.a 00:01:07.757 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.757 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:07.757 [110/265] Linking static target lib/librte_cmdline.a 00:01:07.757 [111/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.757 [112/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:07.757 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.757 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:07.757 [115/265] Linking static target 
lib/librte_mempool.a 00:01:07.757 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.757 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.757 [118/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:07.757 [119/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.757 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:07.757 [121/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:07.757 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.757 [123/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.757 [124/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.757 [125/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:07.757 [126/265] Linking static target lib/librte_net.a 00:01:08.016 [127/265] Linking static target lib/librte_eal.a 00:01:08.016 [128/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:08.016 [129/265] Linking static target lib/librte_rcu.a 00:01:08.016 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:08.016 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:08.016 [132/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:08.016 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:08.016 [134/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:08.016 [135/265] Linking static target lib/librte_timer.a 00:01:08.016 [136/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:08.016 [137/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:08.016 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:08.016 [139/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.016 [140/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.016 [141/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.016 [142/265] Linking target lib/librte_log.so.24.0 00:01:08.016 [143/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:08.016 [144/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:08.016 [145/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.016 [146/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:08.016 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:08.016 [148/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:08.016 [149/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:08.016 [150/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:08.016 [151/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.016 [152/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.016 [153/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.016 [154/265] Linking static target lib/librte_mbuf.a 00:01:08.016 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.016 [156/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.016 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:08.016 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.016 [159/265] Linking static target lib/librte_compressdev.a 00:01:08.016 [160/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.016 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.016 [162/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.016 [163/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:08.016 [164/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.016 [165/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.275 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.275 [167/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.275 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.275 [169/265] Linking static target lib/librte_power.a 00:01:08.275 [170/265] Linking target lib/librte_kvargs.so.24.0 00:01:08.275 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.275 [172/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.275 [173/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.275 [174/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:08.275 [175/265] Linking static target lib/librte_security.a 00:01:08.275 [176/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:08.275 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.275 [178/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.275 [179/265] Linking static target lib/librte_reorder.a 00:01:08.275 [180/265] Linking static target lib/librte_dmadev.a 00:01:08.275 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.275 [182/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.275 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.275 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.275 [185/265] Linking static target lib/librte_hash.a 00:01:08.275 [186/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:08.275 [187/265] Linking target lib/librte_telemetry.so.24.0 00:01:08.275 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.275 [189/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.275 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.275 [191/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:08.275 [192/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.275 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.275 [194/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.275 [195/265] Linking static target drivers/librte_bus_vdev.a 00:01:08.275 [196/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 
00:01:08.275 [197/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.275 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.541 [199/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:08.541 [200/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.541 [201/265] Linking static target lib/librte_cryptodev.a 00:01:08.541 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.541 [203/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.541 [204/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.541 [205/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.541 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.541 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.541 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:08.541 [209/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.541 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.541 [211/265] Linking static target drivers/librte_mempool_ring.a 00:01:08.541 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.541 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.845 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [216/265] Linking static target lib/librte_ethdev.a 00:01:08.845 [217/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [220/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.845 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.103 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.103 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:09.103 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.040 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:10.040 [226/265] Linking static target lib/librte_vhost.a 00:01:10.300 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.681 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.884 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.794 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.794 [231/265] Linking target lib/librte_eal.so.24.0 00:01:17.794 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:17.794 
[233/265] Linking target lib/librte_ring.so.24.0 00:01:17.794 [234/265] Linking target lib/librte_pci.so.24.0 00:01:17.794 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:17.794 [236/265] Linking target lib/librte_meter.so.24.0 00:01:17.794 [237/265] Linking target lib/librte_timer.so.24.0 00:01:17.794 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:17.794 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:17.794 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:17.794 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:17.794 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:17.794 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:17.794 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:17.794 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:17.794 [246/265] Linking target lib/librte_mempool.so.24.0 00:01:18.053 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:18.053 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:18.053 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:18.053 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:18.053 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:18.053 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:18.053 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:18.053 [254/265] Linking target lib/librte_net.so.24.0 00:01:18.053 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:18.313 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:18.313 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:18.313 [258/265] Linking target lib/librte_hash.so.24.0 00:01:18.313 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:18.313 [260/265] Linking target lib/librte_security.so.24.0 00:01:18.313 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:18.573 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:18.573 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:18.573 [264/265] Linking target lib/librte_power.so.24.0 00:01:18.573 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:18.573 INFO: autodetecting backend as ninja 00:01:18.573 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:19.517 CC lib/log/log.o 00:01:19.517 CC lib/log/log_flags.o 00:01:19.517 CC lib/log/log_deprecated.o 00:01:19.517 CC lib/ut/ut.o 00:01:19.517 CC lib/ut_mock/mock.o 00:01:19.777 LIB libspdk_ut_mock.a 00:01:19.777 LIB libspdk_log.a 00:01:19.777 LIB libspdk_ut.a 00:01:19.777 SO libspdk_ut_mock.so.6.0 00:01:19.777 SO libspdk_log.so.7.0 00:01:19.777 SO libspdk_ut.so.2.0 00:01:19.777 SYMLINK libspdk_ut_mock.so 00:01:19.777 SYMLINK libspdk_log.so 00:01:19.777 SYMLINK libspdk_ut.so 00:01:20.035 CC lib/util/base64.o 00:01:20.035 CC lib/util/bit_array.o 00:01:20.035 CC lib/util/cpuset.o 00:01:20.035 CC lib/util/crc16.o 00:01:20.035 CC lib/util/crc32.o 00:01:20.035 CC lib/util/crc32c.o 00:01:20.035 CC lib/util/crc32_ieee.o 00:01:20.035 CC lib/util/crc64.o 00:01:20.035 CC 
lib/util/dif.o 00:01:20.035 CC lib/util/fd.o 00:01:20.035 CC lib/util/iov.o 00:01:20.035 CC lib/util/file.o 00:01:20.035 CC lib/util/hexlify.o 00:01:20.035 CC lib/util/math.o 00:01:20.035 CXX lib/trace_parser/trace.o 00:01:20.035 CC lib/util/string.o 00:01:20.035 CC lib/util/pipe.o 00:01:20.035 CC lib/util/strerror_tls.o 00:01:20.035 CC lib/util/uuid.o 00:01:20.035 CC lib/util/fd_group.o 00:01:20.035 CC lib/util/xor.o 00:01:20.035 CC lib/util/zipf.o 00:01:20.035 CC lib/dma/dma.o 00:01:20.035 CC lib/ioat/ioat.o 00:01:20.294 CC lib/vfio_user/host/vfio_user_pci.o 00:01:20.294 CC lib/vfio_user/host/vfio_user.o 00:01:20.294 LIB libspdk_dma.a 00:01:20.294 SO libspdk_dma.so.4.0 00:01:20.294 LIB libspdk_ioat.a 00:01:20.294 SYMLINK libspdk_dma.so 00:01:20.294 SO libspdk_ioat.so.7.0 00:01:20.552 LIB libspdk_vfio_user.a 00:01:20.552 SO libspdk_vfio_user.so.5.0 00:01:20.552 SYMLINK libspdk_ioat.so 00:01:20.552 LIB libspdk_util.a 00:01:20.552 SYMLINK libspdk_vfio_user.so 00:01:20.552 SO libspdk_util.so.9.0 00:01:20.552 SYMLINK libspdk_util.so 00:01:20.810 LIB libspdk_trace_parser.a 00:01:20.810 SO libspdk_trace_parser.so.5.0 00:01:20.810 SYMLINK libspdk_trace_parser.so 00:01:21.069 CC lib/vmd/vmd.o 00:01:21.069 CC lib/vmd/led.o 00:01:21.069 CC lib/idxd/idxd_user.o 00:01:21.069 CC lib/idxd/idxd.o 00:01:21.069 CC lib/rdma/common.o 00:01:21.069 CC lib/rdma/rdma_verbs.o 00:01:21.069 CC lib/conf/conf.o 00:01:21.069 CC lib/json/json_parse.o 00:01:21.069 CC lib/json/json_util.o 00:01:21.069 CC lib/json/json_write.o 00:01:21.069 CC lib/env_dpdk/env.o 00:01:21.069 CC lib/env_dpdk/memory.o 00:01:21.069 CC lib/env_dpdk/pci.o 00:01:21.069 CC lib/env_dpdk/init.o 00:01:21.069 CC lib/env_dpdk/threads.o 00:01:21.069 CC lib/env_dpdk/pci_ioat.o 00:01:21.069 CC lib/env_dpdk/pci_virtio.o 00:01:21.069 CC lib/env_dpdk/pci_vmd.o 00:01:21.069 CC lib/env_dpdk/pci_idxd.o 00:01:21.069 CC lib/env_dpdk/pci_event.o 00:01:21.069 CC lib/env_dpdk/sigbus_handler.o 00:01:21.069 CC lib/env_dpdk/pci_dpdk.o 00:01:21.069 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:21.069 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:21.327 LIB libspdk_conf.a 00:01:21.327 LIB libspdk_rdma.a 00:01:21.327 SO libspdk_conf.so.6.0 00:01:21.327 SO libspdk_rdma.so.6.0 00:01:21.327 LIB libspdk_json.a 00:01:21.327 SO libspdk_json.so.6.0 00:01:21.327 SYMLINK libspdk_conf.so 00:01:21.327 SYMLINK libspdk_rdma.so 00:01:21.327 SYMLINK libspdk_json.so 00:01:21.327 LIB libspdk_idxd.a 00:01:21.327 SO libspdk_idxd.so.12.0 00:01:21.327 LIB libspdk_vmd.a 00:01:21.585 SO libspdk_vmd.so.6.0 00:01:21.585 SYMLINK libspdk_idxd.so 00:01:21.585 SYMLINK libspdk_vmd.so 00:01:21.585 CC lib/jsonrpc/jsonrpc_server.o 00:01:21.585 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:21.585 CC lib/jsonrpc/jsonrpc_client.o 00:01:21.585 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:21.844 LIB libspdk_jsonrpc.a 00:01:21.844 SO libspdk_jsonrpc.so.6.0 00:01:21.844 LIB libspdk_env_dpdk.a 00:01:21.844 SYMLINK libspdk_jsonrpc.so 00:01:22.103 SO libspdk_env_dpdk.so.14.0 00:01:22.103 SYMLINK libspdk_env_dpdk.so 00:01:22.361 CC lib/rpc/rpc.o 00:01:22.361 LIB libspdk_rpc.a 00:01:22.361 SO libspdk_rpc.so.6.0 00:01:22.619 SYMLINK libspdk_rpc.so 00:01:22.876 CC lib/keyring/keyring.o 00:01:22.876 CC lib/keyring/keyring_rpc.o 00:01:22.876 CC lib/notify/notify.o 00:01:22.876 CC lib/notify/notify_rpc.o 00:01:22.876 CC lib/trace/trace.o 00:01:22.876 CC lib/trace/trace_flags.o 00:01:22.876 CC lib/trace/trace_rpc.o 00:01:22.876 LIB libspdk_notify.a 00:01:22.876 LIB libspdk_keyring.a 00:01:22.876 SO libspdk_notify.so.6.0 00:01:23.133 SO 
libspdk_keyring.so.1.0 00:01:23.133 LIB libspdk_trace.a 00:01:23.133 SYMLINK libspdk_notify.so 00:01:23.133 SO libspdk_trace.so.10.0 00:01:23.133 SYMLINK libspdk_keyring.so 00:01:23.133 SYMLINK libspdk_trace.so 00:01:23.392 CC lib/sock/sock.o 00:01:23.392 CC lib/sock/sock_rpc.o 00:01:23.392 CC lib/thread/thread.o 00:01:23.392 CC lib/thread/iobuf.o 00:01:23.650 LIB libspdk_sock.a 00:01:23.650 SO libspdk_sock.so.9.0 00:01:23.908 SYMLINK libspdk_sock.so 00:01:24.167 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:24.167 CC lib/nvme/nvme_ctrlr.o 00:01:24.167 CC lib/nvme/nvme_fabric.o 00:01:24.167 CC lib/nvme/nvme_ns_cmd.o 00:01:24.167 CC lib/nvme/nvme_pcie.o 00:01:24.167 CC lib/nvme/nvme_ns.o 00:01:24.167 CC lib/nvme/nvme_pcie_common.o 00:01:24.167 CC lib/nvme/nvme_qpair.o 00:01:24.167 CC lib/nvme/nvme.o 00:01:24.167 CC lib/nvme/nvme_quirks.o 00:01:24.167 CC lib/nvme/nvme_transport.o 00:01:24.167 CC lib/nvme/nvme_discovery.o 00:01:24.167 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:24.167 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:24.167 CC lib/nvme/nvme_tcp.o 00:01:24.167 CC lib/nvme/nvme_opal.o 00:01:24.167 CC lib/nvme/nvme_io_msg.o 00:01:24.167 CC lib/nvme/nvme_poll_group.o 00:01:24.167 CC lib/nvme/nvme_zns.o 00:01:24.167 CC lib/nvme/nvme_stubs.o 00:01:24.167 CC lib/nvme/nvme_auth.o 00:01:24.167 CC lib/nvme/nvme_cuse.o 00:01:24.167 CC lib/nvme/nvme_vfio_user.o 00:01:24.167 CC lib/nvme/nvme_rdma.o 00:01:24.425 LIB libspdk_thread.a 00:01:24.425 SO libspdk_thread.so.10.0 00:01:24.683 SYMLINK libspdk_thread.so 00:01:24.941 CC lib/virtio/virtio.o 00:01:24.941 CC lib/virtio/virtio_vhost_user.o 00:01:24.941 CC lib/virtio/virtio_vfio_user.o 00:01:24.941 CC lib/virtio/virtio_pci.o 00:01:24.941 CC lib/blob/blobstore.o 00:01:24.941 CC lib/blob/zeroes.o 00:01:24.941 CC lib/blob/blob_bs_dev.o 00:01:24.941 CC lib/blob/request.o 00:01:24.941 CC lib/init/subsystem.o 00:01:24.941 CC lib/init/subsystem_rpc.o 00:01:24.941 CC lib/accel/accel.o 00:01:24.941 CC lib/init/json_config.o 00:01:24.941 CC lib/accel/accel_sw.o 00:01:24.941 CC lib/accel/accel_rpc.o 00:01:24.941 CC lib/init/rpc.o 00:01:24.941 CC lib/vfu_tgt/tgt_rpc.o 00:01:24.941 CC lib/vfu_tgt/tgt_endpoint.o 00:01:24.941 LIB libspdk_init.a 00:01:25.199 SO libspdk_init.so.5.0 00:01:25.199 LIB libspdk_vfu_tgt.a 00:01:25.199 LIB libspdk_virtio.a 00:01:25.199 SO libspdk_virtio.so.7.0 00:01:25.199 SO libspdk_vfu_tgt.so.3.0 00:01:25.199 SYMLINK libspdk_init.so 00:01:25.199 SYMLINK libspdk_vfu_tgt.so 00:01:25.199 SYMLINK libspdk_virtio.so 00:01:25.457 CC lib/event/app.o 00:01:25.457 CC lib/event/log_rpc.o 00:01:25.457 CC lib/event/reactor.o 00:01:25.457 CC lib/event/app_rpc.o 00:01:25.457 CC lib/event/scheduler_static.o 00:01:25.457 LIB libspdk_accel.a 00:01:25.716 LIB libspdk_nvme.a 00:01:25.716 SO libspdk_accel.so.15.0 00:01:25.716 SYMLINK libspdk_accel.so 00:01:25.716 SO libspdk_nvme.so.13.0 00:01:25.716 LIB libspdk_event.a 00:01:25.716 SO libspdk_event.so.13.0 00:01:25.974 SYMLINK libspdk_event.so 00:01:25.974 CC lib/bdev/bdev.o 00:01:25.974 CC lib/bdev/part.o 00:01:25.974 CC lib/bdev/bdev_rpc.o 00:01:25.974 CC lib/bdev/bdev_zone.o 00:01:25.974 CC lib/bdev/scsi_nvme.o 00:01:25.974 SYMLINK libspdk_nvme.so 00:01:26.909 LIB libspdk_blob.a 00:01:26.909 SO libspdk_blob.so.11.0 00:01:26.909 SYMLINK libspdk_blob.so 00:01:27.168 CC lib/lvol/lvol.o 00:01:27.169 CC lib/blobfs/blobfs.o 00:01:27.169 CC lib/blobfs/tree.o 00:01:27.736 LIB libspdk_bdev.a 00:01:27.736 SO libspdk_bdev.so.15.0 00:01:27.736 LIB libspdk_blobfs.a 00:01:27.736 SYMLINK libspdk_bdev.so 00:01:27.736 SO 
libspdk_blobfs.so.10.0 00:01:28.028 LIB libspdk_lvol.a 00:01:28.028 SO libspdk_lvol.so.10.0 00:01:28.028 SYMLINK libspdk_blobfs.so 00:01:28.028 SYMLINK libspdk_lvol.so 00:01:28.028 CC lib/ftl/ftl_core.o 00:01:28.028 CC lib/ftl/ftl_init.o 00:01:28.028 CC lib/ftl/ftl_layout.o 00:01:28.028 CC lib/ftl/ftl_debug.o 00:01:28.028 CC lib/ftl/ftl_io.o 00:01:28.028 CC lib/nbd/nbd.o 00:01:28.028 CC lib/ftl/ftl_sb.o 00:01:28.028 CC lib/ftl/ftl_l2p.o 00:01:28.028 CC lib/nbd/nbd_rpc.o 00:01:28.028 CC lib/ftl/ftl_l2p_flat.o 00:01:28.028 CC lib/ftl/ftl_nv_cache.o 00:01:28.028 CC lib/scsi/dev.o 00:01:28.028 CC lib/ftl/ftl_band.o 00:01:28.028 CC lib/ftl/ftl_writer.o 00:01:28.028 CC lib/scsi/lun.o 00:01:28.028 CC lib/ftl/ftl_band_ops.o 00:01:28.028 CC lib/scsi/scsi_bdev.o 00:01:28.028 CC lib/scsi/port.o 00:01:28.028 CC lib/ftl/ftl_rq.o 00:01:28.028 CC lib/scsi/scsi.o 00:01:28.028 CC lib/ftl/ftl_reloc.o 00:01:28.028 CC lib/scsi/scsi_pr.o 00:01:28.028 CC lib/ftl/ftl_l2p_cache.o 00:01:28.028 CC lib/scsi/scsi_rpc.o 00:01:28.028 CC lib/ftl/ftl_p2l.o 00:01:28.028 CC lib/scsi/task.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:28.028 CC lib/nvmf/ctrlr.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:28.028 CC lib/nvmf/ctrlr_bdev.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:28.028 CC lib/ublk/ublk.o 00:01:28.028 CC lib/nvmf/ctrlr_discovery.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:28.028 CC lib/nvmf/subsystem.o 00:01:28.028 CC lib/ublk/ublk_rpc.o 00:01:28.028 CC lib/nvmf/nvmf.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:28.028 CC lib/nvmf/tcp.o 00:01:28.028 CC lib/nvmf/nvmf_rpc.o 00:01:28.028 CC lib/nvmf/transport.o 00:01:28.028 CC lib/nvmf/rdma.o 00:01:28.028 CC lib/nvmf/stubs.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:28.028 CC lib/nvmf/vfio_user.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:28.028 CC lib/nvmf/auth.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:28.028 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:28.028 CC lib/ftl/utils/ftl_conf.o 00:01:28.028 CC lib/ftl/utils/ftl_md.o 00:01:28.028 CC lib/ftl/utils/ftl_property.o 00:01:28.028 CC lib/ftl/utils/ftl_mempool.o 00:01:28.028 CC lib/ftl/utils/ftl_bitmap.o 00:01:28.287 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:28.287 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:28.287 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:28.287 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:28.287 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:28.287 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:28.287 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:28.287 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:28.287 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:28.287 CC lib/ftl/base/ftl_base_bdev.o 00:01:28.287 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:28.287 CC lib/ftl/base/ftl_base_dev.o 00:01:28.287 CC lib/ftl/ftl_trace.o 00:01:28.546 LIB libspdk_nbd.a 00:01:28.806 SO libspdk_nbd.so.7.0 00:01:28.806 SYMLINK libspdk_nbd.so 00:01:28.806 LIB libspdk_scsi.a 00:01:28.806 SO libspdk_scsi.so.9.0 00:01:28.806 LIB libspdk_ublk.a 00:01:28.806 SO libspdk_ublk.so.3.0 00:01:29.064 SYMLINK libspdk_scsi.so 00:01:29.064 SYMLINK libspdk_ublk.so 00:01:29.064 LIB libspdk_ftl.a 00:01:29.064 SO libspdk_ftl.so.9.0 00:01:29.323 CC lib/iscsi/iscsi.o 00:01:29.323 CC lib/iscsi/conn.o 00:01:29.323 CC lib/iscsi/init_grp.o 00:01:29.323 CC lib/iscsi/md5.o 00:01:29.323 CC lib/iscsi/param.o 00:01:29.323 CC 
lib/vhost/vhost.o 00:01:29.323 CC lib/iscsi/portal_grp.o 00:01:29.323 CC lib/iscsi/tgt_node.o 00:01:29.323 CC lib/vhost/vhost_blk.o 00:01:29.323 CC lib/vhost/vhost_rpc.o 00:01:29.323 CC lib/iscsi/iscsi_subsystem.o 00:01:29.323 CC lib/iscsi/iscsi_rpc.o 00:01:29.323 CC lib/vhost/vhost_scsi.o 00:01:29.323 CC lib/vhost/rte_vhost_user.o 00:01:29.323 CC lib/iscsi/task.o 00:01:29.323 SYMLINK libspdk_ftl.so 00:01:29.889 LIB libspdk_nvmf.a 00:01:29.889 SO libspdk_nvmf.so.18.0 00:01:29.889 LIB libspdk_vhost.a 00:01:30.147 SYMLINK libspdk_nvmf.so 00:01:30.147 SO libspdk_vhost.so.8.0 00:01:30.147 SYMLINK libspdk_vhost.so 00:01:30.147 LIB libspdk_iscsi.a 00:01:30.147 SO libspdk_iscsi.so.8.0 00:01:30.406 SYMLINK libspdk_iscsi.so 00:01:30.972 CC module/env_dpdk/env_dpdk_rpc.o 00:01:30.972 CC module/vfu_device/vfu_virtio.o 00:01:30.972 CC module/vfu_device/vfu_virtio_blk.o 00:01:30.972 CC module/vfu_device/vfu_virtio_scsi.o 00:01:30.972 CC module/vfu_device/vfu_virtio_rpc.o 00:01:30.972 LIB libspdk_env_dpdk_rpc.a 00:01:30.972 CC module/keyring/file/keyring.o 00:01:30.972 CC module/keyring/file/keyring_rpc.o 00:01:30.972 CC module/accel/ioat/accel_ioat.o 00:01:30.972 CC module/accel/ioat/accel_ioat_rpc.o 00:01:30.972 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:30.972 CC module/accel/iaa/accel_iaa_rpc.o 00:01:30.972 CC module/accel/iaa/accel_iaa.o 00:01:30.972 CC module/sock/posix/posix.o 00:01:30.972 SO libspdk_env_dpdk_rpc.so.6.0 00:01:30.972 CC module/blob/bdev/blob_bdev.o 00:01:30.972 CC module/accel/error/accel_error.o 00:01:30.972 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:30.972 CC module/accel/error/accel_error_rpc.o 00:01:30.972 CC module/scheduler/gscheduler/gscheduler.o 00:01:30.972 CC module/accel/dsa/accel_dsa.o 00:01:30.972 CC module/accel/dsa/accel_dsa_rpc.o 00:01:30.972 SYMLINK libspdk_env_dpdk_rpc.so 00:01:31.228 LIB libspdk_keyring_file.a 00:01:31.228 SO libspdk_keyring_file.so.1.0 00:01:31.228 LIB libspdk_scheduler_gscheduler.a 00:01:31.228 LIB libspdk_accel_ioat.a 00:01:31.228 LIB libspdk_scheduler_dpdk_governor.a 00:01:31.228 LIB libspdk_accel_iaa.a 00:01:31.228 LIB libspdk_accel_error.a 00:01:31.228 SO libspdk_scheduler_gscheduler.so.4.0 00:01:31.228 LIB libspdk_scheduler_dynamic.a 00:01:31.228 SO libspdk_accel_ioat.so.6.0 00:01:31.228 SO libspdk_scheduler_dynamic.so.4.0 00:01:31.228 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:31.228 SO libspdk_accel_error.so.2.0 00:01:31.228 SO libspdk_accel_iaa.so.3.0 00:01:31.228 SYMLINK libspdk_keyring_file.so 00:01:31.228 SYMLINK libspdk_scheduler_gscheduler.so 00:01:31.228 LIB libspdk_blob_bdev.a 00:01:31.228 LIB libspdk_accel_dsa.a 00:01:31.228 SYMLINK libspdk_accel_error.so 00:01:31.228 SYMLINK libspdk_accel_ioat.so 00:01:31.228 SO libspdk_blob_bdev.so.11.0 00:01:31.228 SYMLINK libspdk_scheduler_dynamic.so 00:01:31.228 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:31.228 SYMLINK libspdk_accel_iaa.so 00:01:31.228 SO libspdk_accel_dsa.so.5.0 00:01:31.228 SYMLINK libspdk_blob_bdev.so 00:01:31.228 SYMLINK libspdk_accel_dsa.so 00:01:31.485 LIB libspdk_vfu_device.a 00:01:31.485 SO libspdk_vfu_device.so.3.0 00:01:31.485 SYMLINK libspdk_vfu_device.so 00:01:31.485 LIB libspdk_sock_posix.a 00:01:31.485 SO libspdk_sock_posix.so.6.0 00:01:31.742 SYMLINK libspdk_sock_posix.so 00:01:31.742 CC module/blobfs/bdev/blobfs_bdev.o 00:01:31.742 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:31.742 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:31.742 CC module/bdev/malloc/bdev_malloc.o 00:01:31.742 CC module/bdev/nvme/bdev_nvme.o 
00:01:31.742 CC module/bdev/nvme/nvme_rpc.o 00:01:31.742 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:31.742 CC module/bdev/nvme/vbdev_opal.o 00:01:31.742 CC module/bdev/nvme/bdev_mdns_client.o 00:01:31.742 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:31.742 CC module/bdev/lvol/vbdev_lvol.o 00:01:31.742 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:31.742 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:31.742 CC module/bdev/iscsi/bdev_iscsi.o 00:01:31.742 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:31.742 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:31.742 CC module/bdev/raid/bdev_raid.o 00:01:31.742 CC module/bdev/raid/bdev_raid_sb.o 00:01:31.742 CC module/bdev/raid/bdev_raid_rpc.o 00:01:31.742 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:31.742 CC module/bdev/raid/raid0.o 00:01:31.742 CC module/bdev/error/vbdev_error_rpc.o 00:01:31.742 CC module/bdev/error/vbdev_error.o 00:01:31.742 CC module/bdev/raid/concat.o 00:01:31.742 CC module/bdev/raid/raid1.o 00:01:31.742 CC module/bdev/split/vbdev_split.o 00:01:31.742 CC module/bdev/aio/bdev_aio.o 00:01:31.742 CC module/bdev/split/vbdev_split_rpc.o 00:01:31.742 CC module/bdev/aio/bdev_aio_rpc.o 00:01:31.742 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:31.742 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:31.742 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:31.742 CC module/bdev/delay/vbdev_delay.o 00:01:31.742 CC module/bdev/passthru/vbdev_passthru.o 00:01:31.742 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:31.742 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:31.742 CC module/bdev/null/bdev_null.o 00:01:31.742 CC module/bdev/gpt/gpt.o 00:01:31.742 CC module/bdev/ftl/bdev_ftl.o 00:01:31.742 CC module/bdev/gpt/vbdev_gpt.o 00:01:31.742 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:31.742 CC module/bdev/null/bdev_null_rpc.o 00:01:31.999 LIB libspdk_blobfs_bdev.a 00:01:31.999 SO libspdk_blobfs_bdev.so.6.0 00:01:31.999 LIB libspdk_bdev_null.a 00:01:31.999 SYMLINK libspdk_blobfs_bdev.so 00:01:31.999 LIB libspdk_bdev_split.a 00:01:31.999 LIB libspdk_bdev_malloc.a 00:01:31.999 SO libspdk_bdev_null.so.6.0 00:01:31.999 SO libspdk_bdev_split.so.6.0 00:01:31.999 LIB libspdk_bdev_gpt.a 00:01:31.999 LIB libspdk_bdev_ftl.a 00:01:31.999 LIB libspdk_bdev_error.a 00:01:31.999 SO libspdk_bdev_malloc.so.6.0 00:01:31.999 LIB libspdk_bdev_delay.a 00:01:31.999 LIB libspdk_bdev_passthru.a 00:01:31.999 LIB libspdk_bdev_aio.a 00:01:31.999 SO libspdk_bdev_error.so.6.0 00:01:31.999 SO libspdk_bdev_gpt.so.6.0 00:01:31.999 SYMLINK libspdk_bdev_null.so 00:01:31.999 SO libspdk_bdev_ftl.so.6.0 00:01:31.999 LIB libspdk_bdev_zone_block.a 00:01:32.257 SO libspdk_bdev_passthru.so.6.0 00:01:32.257 SYMLINK libspdk_bdev_split.so 00:01:32.257 LIB libspdk_bdev_iscsi.a 00:01:32.257 SO libspdk_bdev_delay.so.6.0 00:01:32.257 SO libspdk_bdev_aio.so.6.0 00:01:32.257 SYMLINK libspdk_bdev_malloc.so 00:01:32.257 SO libspdk_bdev_zone_block.so.6.0 00:01:32.257 SO libspdk_bdev_iscsi.so.6.0 00:01:32.257 SYMLINK libspdk_bdev_error.so 00:01:32.257 SYMLINK libspdk_bdev_ftl.so 00:01:32.257 SYMLINK libspdk_bdev_passthru.so 00:01:32.257 SYMLINK libspdk_bdev_gpt.so 00:01:32.257 SYMLINK libspdk_bdev_delay.so 00:01:32.257 SYMLINK libspdk_bdev_aio.so 00:01:32.257 SYMLINK libspdk_bdev_iscsi.so 00:01:32.257 LIB libspdk_bdev_lvol.a 00:01:32.257 LIB libspdk_bdev_virtio.a 00:01:32.257 SYMLINK libspdk_bdev_zone_block.so 00:01:32.257 SO libspdk_bdev_lvol.so.6.0 00:01:32.257 SO libspdk_bdev_virtio.so.6.0 00:01:32.257 SYMLINK libspdk_bdev_virtio.so 00:01:32.257 SYMLINK libspdk_bdev_lvol.so 00:01:32.514 LIB 
libspdk_bdev_raid.a 00:01:32.514 SO libspdk_bdev_raid.so.6.0 00:01:32.514 SYMLINK libspdk_bdev_raid.so 00:01:33.448 LIB libspdk_bdev_nvme.a 00:01:33.448 SO libspdk_bdev_nvme.so.7.0 00:01:33.448 SYMLINK libspdk_bdev_nvme.so 00:01:34.015 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:34.015 CC module/event/subsystems/scheduler/scheduler.o 00:01:34.015 CC module/event/subsystems/keyring/keyring.o 00:01:34.015 CC module/event/subsystems/vmd/vmd.o 00:01:34.015 CC module/event/subsystems/iobuf/iobuf.o 00:01:34.015 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:34.015 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:34.015 CC module/event/subsystems/sock/sock.o 00:01:34.015 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:34.015 LIB libspdk_event_sock.a 00:01:34.015 LIB libspdk_event_vfu_tgt.a 00:01:34.015 LIB libspdk_event_scheduler.a 00:01:34.274 LIB libspdk_event_keyring.a 00:01:34.274 LIB libspdk_event_vhost_blk.a 00:01:34.274 SO libspdk_event_sock.so.5.0 00:01:34.274 LIB libspdk_event_vmd.a 00:01:34.274 SO libspdk_event_vfu_tgt.so.3.0 00:01:34.274 LIB libspdk_event_iobuf.a 00:01:34.274 SO libspdk_event_scheduler.so.4.0 00:01:34.274 SO libspdk_event_vhost_blk.so.3.0 00:01:34.274 SO libspdk_event_keyring.so.1.0 00:01:34.274 SO libspdk_event_iobuf.so.3.0 00:01:34.274 SO libspdk_event_vmd.so.6.0 00:01:34.274 SYMLINK libspdk_event_sock.so 00:01:34.274 SYMLINK libspdk_event_vfu_tgt.so 00:01:34.274 SYMLINK libspdk_event_vhost_blk.so 00:01:34.274 SYMLINK libspdk_event_scheduler.so 00:01:34.274 SYMLINK libspdk_event_keyring.so 00:01:34.274 SYMLINK libspdk_event_iobuf.so 00:01:34.274 SYMLINK libspdk_event_vmd.so 00:01:34.534 CC module/event/subsystems/accel/accel.o 00:01:34.793 LIB libspdk_event_accel.a 00:01:34.793 SO libspdk_event_accel.so.6.0 00:01:34.793 SYMLINK libspdk_event_accel.so 00:01:35.052 CC module/event/subsystems/bdev/bdev.o 00:01:35.053 LIB libspdk_event_bdev.a 00:01:35.053 SO libspdk_event_bdev.so.6.0 00:01:35.312 SYMLINK libspdk_event_bdev.so 00:01:35.570 CC module/event/subsystems/scsi/scsi.o 00:01:35.570 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:35.571 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:35.571 CC module/event/subsystems/nbd/nbd.o 00:01:35.571 CC module/event/subsystems/ublk/ublk.o 00:01:35.571 LIB libspdk_event_scsi.a 00:01:35.571 SO libspdk_event_scsi.so.6.0 00:01:35.571 LIB libspdk_event_nbd.a 00:01:35.571 LIB libspdk_event_ublk.a 00:01:35.571 SO libspdk_event_nbd.so.6.0 00:01:35.571 LIB libspdk_event_nvmf.a 00:01:35.571 SYMLINK libspdk_event_scsi.so 00:01:35.571 SO libspdk_event_ublk.so.3.0 00:01:35.829 SO libspdk_event_nvmf.so.6.0 00:01:35.829 SYMLINK libspdk_event_nbd.so 00:01:35.829 SYMLINK libspdk_event_ublk.so 00:01:35.829 SYMLINK libspdk_event_nvmf.so 00:01:36.087 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:36.087 CC module/event/subsystems/iscsi/iscsi.o 00:01:36.087 LIB libspdk_event_vhost_scsi.a 00:01:36.087 LIB libspdk_event_iscsi.a 00:01:36.087 SO libspdk_event_vhost_scsi.so.3.0 00:01:36.087 SO libspdk_event_iscsi.so.6.0 00:01:36.087 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.345 SYMLINK libspdk_event_iscsi.so 00:01:36.345 SO libspdk.so.6.0 00:01:36.345 SYMLINK libspdk.so 00:01:36.606 CC app/trace_record/trace_record.o 00:01:36.606 CXX app/trace/trace.o 00:01:36.606 CC app/spdk_nvme_identify/identify.o 00:01:36.606 CC app/spdk_nvme_discover/discovery_aer.o 00:01:36.606 CC app/spdk_top/spdk_top.o 00:01:36.606 CC app/spdk_nvme_perf/perf.o 00:01:36.606 CC app/spdk_lspci/spdk_lspci.o 00:01:36.606 TEST_HEADER 
include/spdk/accel.h 00:01:36.606 CC test/rpc_client/rpc_client_test.o 00:01:36.606 TEST_HEADER include/spdk/assert.h 00:01:36.606 TEST_HEADER include/spdk/barrier.h 00:01:36.606 TEST_HEADER include/spdk/accel_module.h 00:01:36.606 TEST_HEADER include/spdk/base64.h 00:01:36.606 TEST_HEADER include/spdk/bdev.h 00:01:36.606 TEST_HEADER include/spdk/bdev_module.h 00:01:36.606 TEST_HEADER include/spdk/bit_pool.h 00:01:36.606 TEST_HEADER include/spdk/bdev_zone.h 00:01:36.606 TEST_HEADER include/spdk/bit_array.h 00:01:36.606 TEST_HEADER include/spdk/blobfs.h 00:01:36.606 TEST_HEADER include/spdk/blob_bdev.h 00:01:36.606 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:36.606 TEST_HEADER include/spdk/blob.h 00:01:36.606 TEST_HEADER include/spdk/config.h 00:01:36.606 TEST_HEADER include/spdk/conf.h 00:01:36.870 TEST_HEADER include/spdk/cpuset.h 00:01:36.870 TEST_HEADER include/spdk/crc32.h 00:01:36.870 TEST_HEADER include/spdk/crc16.h 00:01:36.870 TEST_HEADER include/spdk/crc64.h 00:01:36.870 CC app/nvmf_tgt/nvmf_main.o 00:01:36.870 TEST_HEADER include/spdk/dma.h 00:01:36.870 TEST_HEADER include/spdk/dif.h 00:01:36.870 TEST_HEADER include/spdk/env_dpdk.h 00:01:36.870 TEST_HEADER include/spdk/endian.h 00:01:36.870 TEST_HEADER include/spdk/env.h 00:01:36.870 TEST_HEADER include/spdk/event.h 00:01:36.870 TEST_HEADER include/spdk/fd_group.h 00:01:36.870 TEST_HEADER include/spdk/fd.h 00:01:36.870 TEST_HEADER include/spdk/file.h 00:01:36.870 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:36.870 TEST_HEADER include/spdk/ftl.h 00:01:36.870 TEST_HEADER include/spdk/histogram_data.h 00:01:36.870 CC app/vhost/vhost.o 00:01:36.870 TEST_HEADER include/spdk/gpt_spec.h 00:01:36.870 TEST_HEADER include/spdk/idxd.h 00:01:36.870 TEST_HEADER include/spdk/hexlify.h 00:01:36.870 TEST_HEADER include/spdk/init.h 00:01:36.870 TEST_HEADER include/spdk/ioat.h 00:01:36.870 TEST_HEADER include/spdk/ioat_spec.h 00:01:36.870 CC app/spdk_dd/spdk_dd.o 00:01:36.870 TEST_HEADER include/spdk/idxd_spec.h 00:01:36.870 TEST_HEADER include/spdk/json.h 00:01:36.870 TEST_HEADER include/spdk/iscsi_spec.h 00:01:36.871 TEST_HEADER include/spdk/jsonrpc.h 00:01:36.871 TEST_HEADER include/spdk/keyring.h 00:01:36.871 TEST_HEADER include/spdk/keyring_module.h 00:01:36.871 TEST_HEADER include/spdk/log.h 00:01:36.871 TEST_HEADER include/spdk/likely.h 00:01:36.871 TEST_HEADER include/spdk/memory.h 00:01:36.871 TEST_HEADER include/spdk/mmio.h 00:01:36.871 TEST_HEADER include/spdk/lvol.h 00:01:36.871 TEST_HEADER include/spdk/nbd.h 00:01:36.871 TEST_HEADER include/spdk/notify.h 00:01:36.871 TEST_HEADER include/spdk/nvme.h 00:01:36.871 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:36.871 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:36.871 TEST_HEADER include/spdk/nvme_intel.h 00:01:36.871 TEST_HEADER include/spdk/nvme_zns.h 00:01:36.871 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:36.871 CC app/iscsi_tgt/iscsi_tgt.o 00:01:36.871 TEST_HEADER include/spdk/nvme_spec.h 00:01:36.871 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:36.871 TEST_HEADER include/spdk/nvmf.h 00:01:36.871 TEST_HEADER include/spdk/opal.h 00:01:36.871 TEST_HEADER include/spdk/nvmf_spec.h 00:01:36.871 TEST_HEADER include/spdk/nvmf_transport.h 00:01:36.871 TEST_HEADER include/spdk/opal_spec.h 00:01:36.871 TEST_HEADER include/spdk/pci_ids.h 00:01:36.871 TEST_HEADER include/spdk/pipe.h 00:01:36.871 TEST_HEADER include/spdk/rpc.h 00:01:36.871 TEST_HEADER include/spdk/queue.h 00:01:36.871 TEST_HEADER include/spdk/reduce.h 00:01:36.871 TEST_HEADER include/spdk/scsi.h 00:01:36.871 TEST_HEADER 
include/spdk/scheduler.h 00:01:36.871 CC app/spdk_tgt/spdk_tgt.o 00:01:36.871 TEST_HEADER include/spdk/sock.h 00:01:36.871 TEST_HEADER include/spdk/scsi_spec.h 00:01:36.871 TEST_HEADER include/spdk/stdinc.h 00:01:36.871 TEST_HEADER include/spdk/thread.h 00:01:36.871 TEST_HEADER include/spdk/string.h 00:01:36.871 TEST_HEADER include/spdk/trace.h 00:01:36.871 TEST_HEADER include/spdk/trace_parser.h 00:01:36.871 TEST_HEADER include/spdk/tree.h 00:01:36.871 TEST_HEADER include/spdk/ublk.h 00:01:36.871 TEST_HEADER include/spdk/util.h 00:01:36.871 TEST_HEADER include/spdk/uuid.h 00:01:36.871 TEST_HEADER include/spdk/version.h 00:01:36.871 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:36.871 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:36.871 TEST_HEADER include/spdk/vhost.h 00:01:36.871 TEST_HEADER include/spdk/vmd.h 00:01:36.871 TEST_HEADER include/spdk/xor.h 00:01:36.871 TEST_HEADER include/spdk/zipf.h 00:01:36.871 CXX test/cpp_headers/accel.o 00:01:36.871 CXX test/cpp_headers/assert.o 00:01:36.871 CXX test/cpp_headers/accel_module.o 00:01:36.871 CXX test/cpp_headers/barrier.o 00:01:36.871 CXX test/cpp_headers/base64.o 00:01:36.871 CXX test/cpp_headers/bdev_module.o 00:01:36.871 CXX test/cpp_headers/bdev_zone.o 00:01:36.871 CXX test/cpp_headers/bdev.o 00:01:36.871 CXX test/cpp_headers/bit_array.o 00:01:36.871 CXX test/cpp_headers/bit_pool.o 00:01:36.871 CXX test/cpp_headers/blob_bdev.o 00:01:36.871 CXX test/cpp_headers/blobfs_bdev.o 00:01:36.871 CXX test/cpp_headers/blobfs.o 00:01:36.871 CXX test/cpp_headers/blob.o 00:01:36.871 CXX test/cpp_headers/conf.o 00:01:36.871 CXX test/cpp_headers/config.o 00:01:36.871 CXX test/cpp_headers/cpuset.o 00:01:36.871 CXX test/cpp_headers/crc16.o 00:01:36.871 CXX test/cpp_headers/crc32.o 00:01:36.871 CXX test/cpp_headers/dif.o 00:01:36.871 CXX test/cpp_headers/crc64.o 00:01:36.871 CXX test/cpp_headers/dma.o 00:01:36.871 CC examples/vmd/lsvmd/lsvmd.o 00:01:36.871 CC examples/nvme/abort/abort.o 00:01:36.871 CC examples/ioat/verify/verify.o 00:01:36.871 CC examples/vmd/led/led.o 00:01:36.871 CC examples/nvme/hello_world/hello_world.o 00:01:36.871 CC test/nvme/startup/startup.o 00:01:36.871 CC test/app/stub/stub.o 00:01:36.871 CC app/fio/nvme/fio_plugin.o 00:01:36.871 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:36.871 CC test/app/jsoncat/jsoncat.o 00:01:36.871 CC examples/accel/perf/accel_perf.o 00:01:36.871 CC test/thread/poller_perf/poller_perf.o 00:01:36.871 CC test/event/reactor_perf/reactor_perf.o 00:01:36.871 CC test/nvme/reset/reset.o 00:01:36.871 CC examples/ioat/perf/perf.o 00:01:36.871 CC examples/sock/hello_world/hello_sock.o 00:01:36.871 CC test/blobfs/mkfs/mkfs.o 00:01:36.871 CC test/nvme/simple_copy/simple_copy.o 00:01:36.871 CC test/nvme/compliance/nvme_compliance.o 00:01:36.871 CC examples/idxd/perf/perf.o 00:01:36.871 CC test/app/histogram_perf/histogram_perf.o 00:01:36.871 CC examples/util/zipf/zipf.o 00:01:36.871 CC test/nvme/fused_ordering/fused_ordering.o 00:01:36.871 CC test/nvme/connect_stress/connect_stress.o 00:01:36.871 CC test/nvme/e2edp/nvme_dp.o 00:01:36.871 CC examples/nvme/hotplug/hotplug.o 00:01:37.137 CC test/nvme/overhead/overhead.o 00:01:37.137 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:37.137 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:37.137 CC examples/nvme/reconnect/reconnect.o 00:01:37.137 CC test/nvme/reserve/reserve.o 00:01:37.137 CC examples/nvme/arbitration/arbitration.o 00:01:37.137 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:37.137 CC test/nvme/boot_partition/boot_partition.o 00:01:37.137 CC 
test/nvme/fdp/fdp.o 00:01:37.137 CC examples/nvmf/nvmf/nvmf.o 00:01:37.137 CC test/nvme/cuse/cuse.o 00:01:37.137 CC test/nvme/err_injection/err_injection.o 00:01:37.137 CC test/nvme/sgl/sgl.o 00:01:37.137 CC test/env/pci/pci_ut.o 00:01:37.137 CC test/dma/test_dma/test_dma.o 00:01:37.137 CC test/event/event_perf/event_perf.o 00:01:37.137 CC examples/bdev/hello_world/hello_bdev.o 00:01:37.137 CC examples/blob/cli/blobcli.o 00:01:37.137 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:37.137 CC test/env/memory/memory_ut.o 00:01:37.137 CC test/event/app_repeat/app_repeat.o 00:01:37.137 CC test/nvme/aer/aer.o 00:01:37.137 CC examples/thread/thread/thread_ex.o 00:01:37.137 CC test/env/vtophys/vtophys.o 00:01:37.137 CC examples/bdev/bdevperf/bdevperf.o 00:01:37.137 CC test/app/bdev_svc/bdev_svc.o 00:01:37.137 CC test/event/reactor/reactor.o 00:01:37.137 CC test/accel/dif/dif.o 00:01:37.137 CC app/fio/bdev/fio_plugin.o 00:01:37.137 CC examples/blob/hello_world/hello_blob.o 00:01:37.137 CC test/event/scheduler/scheduler.o 00:01:37.137 CC test/bdev/bdevio/bdevio.o 00:01:37.137 LINK spdk_lspci 00:01:37.137 LINK spdk_nvme_discover 00:01:37.137 LINK nvmf_tgt 00:01:37.137 LINK vhost 00:01:37.137 LINK spdk_trace_record 00:01:37.400 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.400 LINK iscsi_tgt 00:01:37.400 LINK rpc_client_test 00:01:37.400 LINK interrupt_tgt 00:01:37.400 LINK lsvmd 00:01:37.400 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.400 CC test/lvol/esnap/esnap.o 00:01:37.400 LINK jsoncat 00:01:37.400 CXX test/cpp_headers/endian.o 00:01:37.400 LINK stub 00:01:37.400 CXX test/cpp_headers/env_dpdk.o 00:01:37.400 CXX test/cpp_headers/env.o 00:01:37.400 LINK vtophys 00:01:37.400 CXX test/cpp_headers/event.o 00:01:37.400 CXX test/cpp_headers/fd_group.o 00:01:37.400 CXX test/cpp_headers/fd.o 00:01:37.400 LINK spdk_tgt 00:01:37.400 LINK event_perf 00:01:37.400 CXX test/cpp_headers/ftl.o 00:01:37.400 CXX test/cpp_headers/gpt_spec.o 00:01:37.400 CXX test/cpp_headers/file.o 00:01:37.400 LINK app_repeat 00:01:37.400 LINK mkfs 00:01:37.400 LINK reactor 00:01:37.400 LINK env_dpdk_post_init 00:01:37.400 CXX test/cpp_headers/hexlify.o 00:01:37.400 CXX test/cpp_headers/histogram_data.o 00:01:37.400 LINK reactor_perf 00:01:37.400 LINK led 00:01:37.400 CXX test/cpp_headers/idxd.o 00:01:37.400 CXX test/cpp_headers/idxd_spec.o 00:01:37.400 LINK err_injection 00:01:37.400 LINK histogram_perf 00:01:37.400 LINK poller_perf 00:01:37.400 CXX test/cpp_headers/init.o 00:01:37.400 LINK spdk_dd 00:01:37.400 LINK verify 00:01:37.400 LINK simple_copy 00:01:37.400 LINK zipf 00:01:37.400 CXX test/cpp_headers/ioat.o 00:01:37.400 LINK connect_stress 00:01:37.400 LINK pmr_persistence 00:01:37.400 LINK startup 00:01:37.659 CXX test/cpp_headers/ioat_spec.o 00:01:37.659 LINK boot_partition 00:01:37.659 LINK hello_bdev 00:01:37.659 LINK fused_ordering 00:01:37.659 LINK nvme_dp 00:01:37.659 LINK thread 00:01:37.659 CXX test/cpp_headers/iscsi_spec.o 00:01:37.659 LINK doorbell_aers 00:01:37.659 CXX test/cpp_headers/json.o 00:01:37.659 CXX test/cpp_headers/jsonrpc.o 00:01:37.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.659 CXX test/cpp_headers/keyring.o 00:01:37.659 LINK cmb_copy 00:01:37.659 LINK reserve 00:01:37.659 LINK bdev_svc 00:01:37.659 CXX test/cpp_headers/keyring_module.o 00:01:37.659 CXX test/cpp_headers/likely.o 00:01:37.659 LINK spdk_trace 00:01:37.659 LINK aer 00:01:37.659 LINK ioat_perf 00:01:37.659 LINK scheduler 00:01:37.659 LINK hello_world 00:01:37.659 LINK nvmf 00:01:37.659 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.659 LINK nvme_compliance 00:01:37.659 LINK hello_sock 00:01:37.659 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:37.659 LINK hotplug 00:01:37.659 CXX test/cpp_headers/log.o 00:01:37.659 LINK reset 00:01:37.659 LINK abort 00:01:37.659 CXX test/cpp_headers/mmio.o 00:01:37.659 CXX test/cpp_headers/lvol.o 00:01:37.659 CXX test/cpp_headers/memory.o 00:01:37.659 LINK sgl 00:01:37.659 LINK overhead 00:01:37.659 CXX test/cpp_headers/nbd.o 00:01:37.659 CXX test/cpp_headers/notify.o 00:01:37.659 LINK hello_blob 00:01:37.659 CXX test/cpp_headers/nvme.o 00:01:37.659 CXX test/cpp_headers/nvme_intel.o 00:01:37.659 CXX test/cpp_headers/nvme_ocssd.o 00:01:37.659 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:37.659 CXX test/cpp_headers/nvme_zns.o 00:01:37.659 CXX test/cpp_headers/nvme_spec.o 00:01:37.659 CXX test/cpp_headers/nvmf_cmd.o 00:01:37.659 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:37.659 LINK test_dma 00:01:37.918 CXX test/cpp_headers/nvmf.o 00:01:37.918 LINK idxd_perf 00:01:37.918 CXX test/cpp_headers/nvmf_spec.o 00:01:37.918 LINK bdevio 00:01:37.918 CXX test/cpp_headers/nvmf_transport.o 00:01:37.918 LINK arbitration 00:01:37.918 LINK fdp 00:01:37.918 CXX test/cpp_headers/opal.o 00:01:37.918 CXX test/cpp_headers/opal_spec.o 00:01:37.918 LINK reconnect 00:01:37.918 CXX test/cpp_headers/pci_ids.o 00:01:37.918 CXX test/cpp_headers/pipe.o 00:01:37.918 LINK accel_perf 00:01:37.918 CXX test/cpp_headers/queue.o 00:01:37.918 CXX test/cpp_headers/reduce.o 00:01:37.918 CXX test/cpp_headers/rpc.o 00:01:37.918 CXX test/cpp_headers/scheduler.o 00:01:37.918 CXX test/cpp_headers/scsi.o 00:01:37.918 CXX test/cpp_headers/scsi_spec.o 00:01:37.918 CXX test/cpp_headers/sock.o 00:01:37.918 CXX test/cpp_headers/stdinc.o 00:01:37.918 CXX test/cpp_headers/string.o 00:01:37.918 CXX test/cpp_headers/thread.o 00:01:37.918 CXX test/cpp_headers/trace.o 00:01:37.918 CXX test/cpp_headers/trace_parser.o 00:01:37.918 LINK dif 00:01:37.918 CXX test/cpp_headers/tree.o 00:01:37.918 CXX test/cpp_headers/ublk.o 00:01:37.918 CXX test/cpp_headers/util.o 00:01:37.918 CXX test/cpp_headers/uuid.o 00:01:37.918 CXX test/cpp_headers/version.o 00:01:37.918 CXX test/cpp_headers/vfio_user_pci.o 00:01:37.918 LINK blobcli 00:01:37.918 LINK spdk_nvme 00:01:37.918 CXX test/cpp_headers/vfio_user_spec.o 00:01:37.918 CXX test/cpp_headers/vhost.o 00:01:37.918 CXX test/cpp_headers/vmd.o 00:01:37.918 CXX test/cpp_headers/xor.o 00:01:37.918 CXX test/cpp_headers/zipf.o 00:01:37.918 LINK pci_ut 00:01:38.176 LINK nvme_manage 00:01:38.176 LINK spdk_top 00:01:38.176 LINK spdk_bdev 00:01:38.176 LINK nvme_fuzz 00:01:38.176 LINK spdk_nvme_identify 00:01:38.176 LINK spdk_nvme_perf 00:01:38.434 LINK bdevperf 00:01:38.434 LINK vhost_fuzz 00:01:38.434 LINK mem_callbacks 00:01:38.434 LINK memory_ut 00:01:38.694 LINK cuse 00:01:39.262 LINK iscsi_fuzz 00:01:41.166 LINK esnap 00:01:41.166 00:01:41.166 real 0m42.607s 00:01:41.166 user 6m31.526s 00:01:41.166 sys 3m37.745s 00:01:41.166 02:56:12 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:41.166 02:56:12 make -- common/autotest_common.sh@10 -- $ set +x 00:01:41.166 ************************************ 00:01:41.166 END TEST make 00:01:41.166 ************************************ 00:01:41.166 02:56:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:41.166 02:56:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:41.166 02:56:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:41.167 02:56:12 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:41.167 02:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:41.167 02:56:12 -- pm/common@44 -- $ pid=750338 00:01:41.167 02:56:12 -- pm/common@50 -- $ kill -TERM 750338 00:01:41.167 02:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.167 02:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:41.167 02:56:12 -- pm/common@44 -- $ pid=750340 00:01:41.167 02:56:12 -- pm/common@50 -- $ kill -TERM 750340 00:01:41.167 02:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.167 02:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:41.167 02:56:12 -- pm/common@44 -- $ pid=750342 00:01:41.167 02:56:12 -- pm/common@50 -- $ kill -TERM 750342 00:01:41.167 02:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.167 02:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:41.167 02:56:12 -- pm/common@44 -- $ pid=750363 00:01:41.167 02:56:12 -- pm/common@50 -- $ sudo -E kill -TERM 750363 00:01:41.426 02:56:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:41.426 02:56:12 -- nvmf/common.sh@7 -- # uname -s 00:01:41.426 02:56:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:41.426 02:56:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:41.426 02:56:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:41.426 02:56:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:41.426 02:56:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:41.426 02:56:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:41.426 02:56:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:41.426 02:56:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:41.426 02:56:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:41.426 02:56:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:41.426 02:56:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:41.426 02:56:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:41.426 02:56:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:41.426 02:56:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:41.426 02:56:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:41.426 02:56:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:41.426 02:56:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:41.426 02:56:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:41.426 02:56:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.426 02:56:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.426 02:56:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.426 02:56:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.426 02:56:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.426 02:56:12 -- paths/export.sh@5 -- # export PATH 00:01:41.426 02:56:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.426 02:56:12 -- nvmf/common.sh@47 -- # : 0 00:01:41.426 02:56:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:41.426 02:56:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:41.426 02:56:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:41.426 02:56:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:41.426 02:56:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:41.426 02:56:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:41.426 02:56:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:41.426 02:56:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:41.426 02:56:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:41.426 02:56:12 -- spdk/autotest.sh@32 -- # uname -s 00:01:41.426 02:56:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:41.426 02:56:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:41.426 02:56:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:41.426 02:56:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:41.426 02:56:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:41.426 02:56:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:41.426 02:56:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:41.426 02:56:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:41.426 02:56:12 -- spdk/autotest.sh@48 -- # udevadm_pid=808525 00:01:41.426 02:56:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:41.426 02:56:12 -- pm/common@17 -- # local monitor 00:01:41.426 02:56:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s 00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s 00:01:41.426 02:56:12 -- pm/common@25 -- # sleep 1 00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s 00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s 00:01:41.426 02:56:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
00:01:41.426 02:56:12 -- spdk/autotest.sh@44 -- # modprobe nbd
00:01:41.426 02:56:12 -- spdk/autotest.sh@46 -- # type -P udevadm
00:01:41.426 02:56:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:01:41.426 02:56:12 -- spdk/autotest.sh@48 -- # udevadm_pid=808525
00:01:41.426 02:56:12 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:01:41.426 02:56:12 -- pm/common@17 -- # local monitor
00:01:41.426 02:56:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s
00:01:41.426 02:56:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s
00:01:41.426 02:56:12 -- pm/common@25 -- # sleep 1
00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s
00:01:41.426 02:56:12 -- pm/common@21 -- # date +%s
00:01:41.426 02:56:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734572
00:01:41.426 02:56:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734572
00:01:41.426 02:56:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734572
00:01:41.426 02:56:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715734572
00:01:41.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734572_collect-vmstat.pm.log
00:01:41.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734572_collect-cpu-temp.pm.log
00:01:41.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734572_collect-cpu-load.pm.log
00:01:41.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715734572_collect-bmc-pm.bmc.pm.log
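The four collect-* monitors just launched follow a common pid-file daemon pattern: start in the background, drop a .pid file under the power/ output directory, and let a teardown loop (the same pm/common@42-50 loop that TERMed the previous collectors at the top of this excerpt) kill whatever pids it finds. A condensed sketch of that contract (names abbreviated; not the literal pm/common source):

    #!/usr/bin/env bash
    # Sketch of the start/stop contract between pm/common and the collectors.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    power_dir=$SPDK_DIR/../output/power
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

    start_monitors() {
        local monitor
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            "$SPDK_DIR/scripts/perf/pm/$monitor" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
            echo $! > "$power_dir/$monitor.pid"   # pid file the teardown loop looks for
        done
    }

    stop_monitors() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            [[ -e $power_dir/$monitor.pid ]] || continue
            pid=$(< "$power_dir/$monitor.pid")
            kill -TERM "$pid"   # collect-bmc-pm runs as root, hence the 'sudo -E kill' above
        done
    }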
00:01:42.364 02:56:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:01:42.364 02:56:13 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:01:42.364 02:56:13 -- common/autotest_common.sh@720 -- # xtrace_disable
00:01:42.364 02:56:13 -- common/autotest_common.sh@10 -- # set +x
00:01:42.364 02:56:13 -- spdk/autotest.sh@59 -- # create_test_list
00:01:42.364 02:56:13 -- common/autotest_common.sh@744 -- # xtrace_disable
00:01:42.364 02:56:13 -- common/autotest_common.sh@10 -- # set +x
00:01:42.364 02:56:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:01:42.364 02:56:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:42.364 02:56:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:42.364 02:56:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:42.364 02:56:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:42.364 02:56:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:01:42.364 02:56:13 -- common/autotest_common.sh@1451 -- # uname
00:01:42.364 02:56:13 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']'
00:01:42.364 02:56:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:01:42.364 02:56:13 -- common/autotest_common.sh@1471 -- # uname
00:01:42.364 02:56:13 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]]
00:01:42.364 02:56:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:01:42.364 02:56:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:01:42.364 02:56:13 -- spdk/autotest.sh@72 -- # hash lcov
00:01:42.364 02:56:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:01:42.364 02:56:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:01:42.364 --rc lcov_branch_coverage=1
00:01:42.364 --rc lcov_function_coverage=1
00:01:42.364 --rc genhtml_branch_coverage=1
00:01:42.364 --rc genhtml_function_coverage=1
00:01:42.364 --rc genhtml_legend=1
00:01:42.364 --rc geninfo_all_blocks=1
00:01:42.364 '
00:01:42.364 02:56:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:01:42.364 --rc lcov_branch_coverage=1
00:01:42.364 --rc lcov_function_coverage=1
00:01:42.364 --rc genhtml_branch_coverage=1
00:01:42.364 --rc genhtml_function_coverage=1
00:01:42.364 --rc genhtml_legend=1
00:01:42.364 --rc geninfo_all_blocks=1
00:01:42.364 '
00:01:42.364 02:56:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:01:42.364 --rc lcov_branch_coverage=1
00:01:42.364 --rc lcov_function_coverage=1
00:01:42.364 --rc genhtml_branch_coverage=1
00:01:42.364 --rc genhtml_function_coverage=1
00:01:42.364 --rc genhtml_legend=1
00:01:42.364 --rc geninfo_all_blocks=1
00:01:42.364 --no-external'
00:01:42.364 02:56:13 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:01:42.364 --rc lcov_branch_coverage=1
00:01:42.364 --rc lcov_function_coverage=1
00:01:42.364 --rc genhtml_branch_coverage=1
00:01:42.364 --rc genhtml_function_coverage=1
00:01:42.364 --rc genhtml_legend=1
00:01:42.364 --rc geninfo_all_blocks=1
00:01:42.364 --no-external'
00:01:42.365 02:56:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:42.621 lcov: LCOV version 1.14
00:01:42.621 02:56:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:01:50.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:01:50.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
(the same "no functions found" / geninfo WARNING pair repeats at 00:01:51.343 for lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_band_upgrade.gcno and ftl_p2l_upgrade.gcno, and at 00:02:03.560-00:02:03.561 for every header stub under test/cpp_headers/: assert, accel_module, barrier, bdev_module, accel, bdev_zone, bit_array, base64, blob_bdev, blobfs, config, blobfs_bdev, bdev, bit_pool, crc16, conf, cpuset, crc32, blob, dif, crc64, dma, endian, env, env_dpdk, fd_group, event, ftl, fd, gpt_spec, file, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, keyring, jsonrpc, json, iscsi_spec, likely, keyring_module, log, lvol, mmio, memory, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_zns, nvme_spec, nvmf_fc_spec, nvmf_cmd, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, reduce, queue, pipe, scheduler, scsi, rpc, scsi_spec, sock, string, stdinc, trace, thread, trace_parser, ublk, tree, util, uuid, version, vfio_user_pci, vfio_user_spec, vmd, vhost, zipf and xor)
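The @85 command above captures an initial (-i) coverage baseline: zeroed execution counts for every instrumented object, tagged "Baseline". The geninfo warnings are simply objects whose translation units define no functions (the cpp_headers compile-only stubs, for instance), so there is no data to record. A standalone sketch of the capture; the post-test merge in the trailing comments is an assumption about the usual workflow, not something shown in this section:

    #!/usr/bin/env bash
    # Re-runnable sketch of the coverage baseline capture above.
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external)
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$src/../output
    # -c capture, -i initial (all-zero counts), -t test-name tag, -d build tree
    lcov "${LCOV_OPTS[@]}" -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # Typical follow-up after the tests run (assumed, not part of this log):
    #   lcov "${LCOV_OPTS[@]}" -q -c -t Tests -d "$src" -o "$out/cov_test.info"
    #   lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"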
00:02:03.561 02:56:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:03.561 02:56:34 -- common/autotest_common.sh@720 -- # xtrace_disable
00:02:03.561 02:56:34 -- common/autotest_common.sh@10 -- # set +x
00:02:03.561 02:56:34 -- spdk/autotest.sh@91 -- # rm -f
00:02:03.561 02:56:34 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:06.100 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:06.101 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:06.101 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:06.359 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:06.618 02:56:37 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:06.618 02:56:37 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:06.618 02:56:37 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:06.618 02:56:37 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:06.618 02:56:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:06.618 02:56:37 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:06.618 02:56:37 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:06.618 02:56:37 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:06.618 02:56:37 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:06.618 02:56:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
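The is_block_zoned check above reduces to one sysfs read. A self-contained sketch of the same walk (close to, but not literally, the autotest_common.sh helper):

    #!/usr/bin/env bash
    # Sketch: a namespace is zoned iff /sys/block/<dev>/queue/zoned != "none".
    shopt -s nullglob
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1   # the real helper keys this by PCI address
        fi
    done
    echo "zoned devices: ${#zoned_devs[@]}"   # 0 in this run, so (( 0 > 0 )) is false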
00:02:06.618 02:56:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:06.618 02:56:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:06.618 02:56:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:06.618 02:56:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:06.618 02:56:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:06.618 No valid GPT data, bailing
00:02:06.618 02:56:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:06.618 02:56:37 -- scripts/common.sh@391 -- # pt=
00:02:06.618 02:56:37 -- scripts/common.sh@392 -- # return 1
00:02:06.618 02:56:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:06.619 1+0 records in
00:02:06.619 1+0 records out
00:02:06.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444825 s, 236 MB/s
00:02:06.619 02:56:37 -- spdk/autotest.sh@118 -- # sync
00:02:06.619 02:56:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:06.619 02:56:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:06.619 02:56:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes
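block_in_use is the guard that makes the dd wipe above safe: the device is only scrubbed after both the SPDK GPT parser ("No valid GPT data, bailing") and blkid (empty PTTYPE) fail to find a partition table. A simplified sketch of that decision; the exit-code behaviour of spdk-gpt.py is assumed from the trace, and the snippet is destructive, so treat it as illustration only:

    #!/usr/bin/env bash
    # Sketch of the wipe guard; do not run against a disk you care about.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    block_in_use() {
        local block=$1 pt
        # assumption: spdk-gpt.py exits non-zero when no valid GPT exists
        if "$SPDK_DIR/scripts/spdk-gpt.py" "$block"; then
            return 0                              # valid GPT -> device is in use
        fi
        pt=$(blkid -s PTTYPE -o value "$block")
        [[ -n $pt ]] && return 0                  # any partition-table type -> in use
        return 1                                  # free: empty PTTYPE, like 'pt=' above
    }
    if ! block_in_use /dev/nvme0n1; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1   # scrub stale metadata
        sync
    fi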
00:02:11.896 02:56:42 -- spdk/autotest.sh@124 -- # uname -s
00:02:11.896 02:56:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:11.896 02:56:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:11.896 02:56:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:11.896 02:56:42 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:11.896 02:56:42 -- common/autotest_common.sh@10 -- # set +x
00:02:11.896 ************************************
00:02:11.896 START TEST setup.sh
00:02:11.896 ************************************
00:02:11.896 02:56:42 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:11.896 * Looking for test storage...
00:02:11.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:11.896 02:56:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:11.896 02:56:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:11.896 02:56:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:11.896 02:56:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:11.896 02:56:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:11.896 02:56:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:11.896 ************************************
00:02:11.896 START TEST acl
00:02:11.896 ************************************
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:11.896 * Looking for test storage...
00:02:11.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:11.896 02:56:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:11.896 02:56:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:11.896 02:56:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:11.896 02:56:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:15.191 02:56:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:15.191 02:56:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:15.191 02:56:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:15.191 02:56:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:15.191 02:56:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:15.191 02:56:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:17.098 Hugepages
00:02:17.098 node hugesize free / total
(each hugepage row -- the 1048576kB and 2048kB sizes -- fails the *:*:*.* BDF test at setup/acl.sh@19 and is skipped with continue before the next read)
00:02:17.098
00:02:17.098 Type BDF Vendor Device NUMA Driver Device Block devices
(the read / [[ $dev == *:*:*.* ]] / [[ $driver == nvme ]] cycle then skips the sixteen ioatdma channels 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 with continue; the only nvme-bound controller is kept:)
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:17.358 02:56:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:17.358 02:56:48 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:17.358 02:56:48 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:17.358 02:56:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:17.358 ************************************
00:02:17.358 START TEST denied
00:02:17.358 ************************************
00:02:17.358 02:56:48 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:02:17.358 02:56:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:02:17.358 02:56:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:17.358 02:56:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:02:17.358 02:56:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:17.358 02:56:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:20.651 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:20.651 02:56:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:24.846
00:02:24.846 real 0m6.721s
00:02:24.846 user 0m2.238s
00:02:24.846 sys 0m3.819s
00:02:24.846 02:56:55 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:24.846 02:56:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:24.846 ************************************
00:02:24.846 END TEST denied
00:02:24.846 ************************************
00:02:24.846 02:56:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:24.846 02:56:55 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:24.846 02:56:55 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:24.846 02:56:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:24.846 ************************************
00:02:24.846 START TEST allowed
00:02:24.846 ************************************
00:02:24.846 02:56:55 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:02:24.846 02:56:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:02:24.846 02:56:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:02:24.846 02:56:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:24.846 02:56:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:24.846 02:56:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:28.202 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:02:28.202 02:56:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:02:28.202 02:56:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:28.202 02:56:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:28.202 02:56:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:28.202 02:56:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:31.493
00:02:31.493 real 0m6.862s
00:02:31.493 user 0m2.093s
00:02:31.493 sys 0m3.901s
00:02:31.493 02:57:02 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:31.493 02:57:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:31.493 ************************************
00:02:31.493 END TEST allowed
00:02:31.493 ************************************
00:02:31.493
00:02:31.493 real 0m19.530s
00:02:31.493 user 0m6.588s
00:02:31.493 sys 0m11.610s
00:02:31.493 02:57:02 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:31.493 02:57:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:31.493 ************************************
00:02:31.493 END TEST acl
00:02:31.493 ************************************
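The acl suite that just finished rests on the collect/verify pair traced above: parse the BDF/driver table that setup.sh status prints, remember every nvme-bound controller, and confirm the binding through sysfs after each reset. A condensed sketch of that logic (simplified; PCI_BLOCKED handling omitted):

    #!/usr/bin/env bash
    # Sketch of collect_setup_devs + verify from the trace above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # skip the Hugepages table and headers
        [[ $driver == nvme ]] || continue   # ioatdma channels are skipped, as above
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <("$SPDK_DIR/scripts/setup.sh" status)

    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == "${drivers[$dev]}" ]] || return 1
        done
    }
    verify "${devs[@]}" && echo "all ${#devs[@]} controller(s) bound as expected"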
00:02:31.493 02:57:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:31.493 02:57:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:31.493 02:57:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:31.493 02:57:02 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:31.493 ************************************
00:02:31.493 START TEST hugepages
00:02:31.493 ************************************
00:02:31.493 02:57:02 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:31.493 * Looking for test storage...
00:02:31.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172754264 kB' 'MemAvailable: 175781704 kB' 'Buffers: 3888 kB' 'Cached: 10740696 kB' 'SwapCached: 0 kB' 'Active: 7712948 kB' 'Inactive: 3489408 kB' 'Active(anon): 7147396 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461208 kB' 'Mapped: 168140 kB' 'Shmem: 6689624 kB' 'KReclaimable: 233392 kB' 'Slab: 795784 kB' 'SReclaimable: 233392 kB' 'SUnreclaim: 562392 kB' 'KernelStack: 20464 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 8501904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314768 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
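get_meminfo, whose trace starts here, is a small /proc/meminfo field extractor: use the per-node meminfo file when a node is given, strip the "Node <n>" prefixes, then scan line by line for the requested key; the long compare/continue run that follows is exactly that scan. A self-contained sketch (close to, but not literally, setup/common.sh):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper being traced below.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 (kB) on this box; 2048 pages x 2048 kB = 4194304 kB, matching Hugetlb above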
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:31.493 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
(the same [[ $var == Hugepagesize ]] / continue / read cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted)
00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:31.494 02:57:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:31.495 02:57:02 
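The scan traced above is the whole of the lookup: common.sh slurps /proc/meminfo, then walks it one line at a time with IFS=': ', hitting "continue" for every key until the requested one matches (xtrace prints the right-hand side of [[ ... ]] with each character escaped, which is why the pattern appears as \H\u\g\e\p\a\g\e\s\i\z\e). A minimal standalone sketch of that pattern, with a hypothetical helper name (the real implementation is the get_meminfo function in the setup/common.sh traced here, which also handles per-node /sys/devices/system/node/node*/meminfo files):

#!/usr/bin/env bash
# meminfo_value KEY -- print the value column of KEY from /proc/meminfo.
# Hypothetical helper mirroring the loop traced above.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through; this is the per-field
        # "continue" the xtrace shows for MemFree, Buffers, etc.
        [[ $var == "$get" ]] || continue
        echo "$val" # value only, e.g. 2048 (kB) for Hugepagesize
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value Hugepagesize # prints 2048 on this system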
00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:31.495 02:57:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:31.495 02:57:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:31.495 02:57:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:31.495 02:57:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:31.495 ************************************
00:02:31.495 START TEST default_setup
00:02:31.495 ************************************
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
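The sizing just traced is plain arithmetic: get_test_nr_hugepages 2097152 0 requests 2097152 kB backed by the 2048 kB default page size on node 0, i.e. nr_hugepages = 2097152 / 2048 = 1024, all assigned to nodes_test[0]. A sketch of that computation, reusing the variable names from the trace (reading the first argument as kB is an assumption consistent with the numbers; the sysfs path in the final comment is the standard kernel interface, shown only for illustration):

#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages arithmetic traced above.
default_hugepages=2048 # kB, from Hugepagesize in /proc/meminfo

size=2097152  # requested pool size in kB (2 GiB)
node_ids=(0)  # remaining arguments: target NUMA nodes

(( size >= default_hugepages )) || exit 1
nr_hugepages=$((size / default_hugepages)) # -> 1024 pages

declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages # node 0 gets all 1024 pages
done

# Applying the plan later means writes like (root required):
#   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo "node0 -> ${nodes_test[0]} pages of ${default_hugepages} kB"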
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:31.495 02:57:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:34.033 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:34.033 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:02:34.978 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
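setup output re-runs scripts/setup.sh, whose per-device lines above show every ioatdma channel and the NVMe drive being handed to vfio-pci. A minimal sketch of the standard sysfs rebind mechanism those lines imply (device address and driver names taken from the log; the real setup.sh does considerably more, and this assumes the vfio-pci module is already loaded):

#!/usr/bin/env bash
# Rebind one PCI function from its kernel driver to vfio-pci.
dev=0000:00:04.7 # first ioatdma channel in the log above

# Prefer vfio-pci on the next probe of this device.
echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
# Detach the current driver (ioatdma here), if one is bound.
echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
# Re-probe; the override makes vfio-pci claim the device.
echo "$dev" > /sys/bus/pci/drivers_probe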
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.978 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174918928 kB' 'MemAvailable: 177946336 kB' 'Buffers: 3888 kB' 'Cached: 10740800 kB' 'SwapCached: 0 kB' 'Active: 7726500 kB' 'Inactive: 3489408 kB' 'Active(anon): 7160948 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475188 kB' 'Mapped: 167188 kB' 'Shmem: 6689728 kB' 'KReclaimable: 233328 kB' 'Slab: 794564 kB' 'SReclaimable: 233328 kB' 'SUnreclaim: 561236 kB' 'KernelStack: 20848 kB' 'PageTables: 9700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8521876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315148 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:34.979 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31..32 -- # [xtrace elided: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits "continue"]
00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
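With anon now known (AnonHugePages: 0 kB, so transparent hugepages are not inflating the numbers), the verifier goes on to collect HugePages_Surp and HugePages_Rsvd from the same snapshot, which reports HugePages_Total/Free at 1024/1024. The consistency checks this is building toward look roughly like the sketch below; the meminfo helper and the exact assertions are illustrative, not the script's actual code:

#!/usr/bin/env bash
# Rough sketch of the hugepage bookkeeping being verified above.
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

anon=$(meminfo AnonHugePages)    # 0 kB in the snapshot above
total=$(meminfo HugePages_Total) # 1024
free=$(meminfo HugePages_Free)   # 1024
surp=$(meminfo HugePages_Surp)   # 0
rsvd=$(meminfo HugePages_Rsvd)   # 0

(( anon == 0 ))     || echo "unexpected THP usage: ${anon} kB"
(( surp == 0 ))     || echo "surplus hugepages present: ${surp}"
(( total == 1024 )) || echo "expected 1024 hugepages, found ${total}"
(( free <= total )) || echo "inconsistent free count: ${free}/${total}"
echo "pool: ${free}/${total} free, surp=${surp}, rsvd=${rsvd}"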
00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174918624 kB' 'MemAvailable: 177946016 kB' 'Buffers: 3888 kB' 'Cached: 10740804 kB' 'SwapCached: 0 kB' 'Active: 7729192 kB' 'Inactive: 3489408 kB' 'Active(anon): 7163640 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477860 kB' 'Mapped: 167744 kB' 'Shmem: 6689732 kB' 'KReclaimable: 233296 kB' 'Slab: 794644 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561348 kB' 'KernelStack: 21136 kB' 'PageTables: 10288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8521324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315084 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.980 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.981 02:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... xtrace of the per-key scan elided: "IFS=': '" / 'read -r var val _' / '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' repeated for every remaining key from NFS_Unstable through HugePages_Rsvd ...]
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
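The block above is SPDK's get_meminfo() helper (setup/common.sh) doing a linear scan of /proc/meminfo: each "IFS=': '" / 'read -r var val _' pair splits one "Key: value" line, every non-matching key hits the `continue`, and the first match ends in `echo`/`return 0`, whose output the caller captures; here hugepages.sh stores HugePages_Surp as surp=0. A minimal sketch of the same lookup (meminfo_get is a hypothetical stand-in; the real helper also handles per-NUMA-node meminfo files, which appear later in this log):

    # meminfo_get KEY: print the value of KEY from /proc/meminfo
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # 'HugePages_Surp:    0' -> var=HugePages_Surp, val=0
            [[ $var == "$get" ]] || continue   # the long run of skipped keys traced above
            echo "$val"                        # kB for sizes, a bare page count for HugePages_*
            return 0
        done </proc/meminfo
        return 1
    }

    surp=$(meminfo_get HugePages_Surp)         # -> 0 on this test node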
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:34.981 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.982 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174910580 kB' 'MemAvailable: 177937972 kB' 'Buffers: 3888 kB' 'Cached: 10740816 kB' 'SwapCached: 0 kB' 'Active: 7732136 kB' 'Inactive: 3489408 kB' 'Active(anon): 7166584 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480232 kB' 'Mapped: 167692 kB' 'Shmem: 6689744 kB' 'KReclaimable: 233296 kB' 'Slab: 794584 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561288 kB' 'KernelStack: 21072 kB' 'PageTables: 10444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8525652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315104 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
[... xtrace of the per-key scan elided: "IFS=': '" / 'read -r var val _' / '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' repeated for every key from MemTotal through HugePages_Free ...]
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:34.984 02:57:06 nr_hugepages=1024
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:34.984 02:57:06 resv_hugepages=0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:34.984 02:57:06 surplus_hugepages=0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:34.984 02:57:06 anon_hugepages=0
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
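At this point the script has all three counters it asked /proc/meminfo for, and the hugepages.sh@107 test is the pool-consistency check: the 1024 pages the test expects must be accounted for by the configured pool (nr_hugepages) plus surplus and reserved pages. With the values from the trace it reduces to plain arithmetic:

    nr_hugepages=1024   # HugePages_Total: pages currently in the pool
    surp=0              # HugePages_Surp:  surplus pages allocated beyond the configured size
    resv=0              # HugePages_Rsvd:  pages reserved for mappings but not yet faulted in
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so the check passes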
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.984 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174909432 kB' 'MemAvailable: 177936824 kB' 'Buffers: 3888 kB' 'Cached: 10740840 kB' 'SwapCached: 0 kB' 'Active: 7726932 kB' 'Inactive: 3489408 kB' 'Active(anon): 7161380 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475044 kB' 'Mapped: 167696 kB' 'Shmem: 6689768 kB' 'KReclaimable: 233296 kB' 'Slab: 794552 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561256 kB' 'KernelStack: 20656 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8520776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315132 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
[... xtrace of the per-key scan elided: "IFS=': '" / 'read -r var val _' / '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' repeated for every key from MemTotal through Unaccepted ...]
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
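The `echo 1024` / `return 0` above is get_meminfo() handing HugePages_Total back through the command substitution at hugepages.sh@110. The snapshot printed above also lets the pool size be cross-checked by hand, since on this box (only 2 MiB pages in use) the kernel's Hugetlb figure is just the page count times the page size:

    pages=1024      # HugePages_Total: 1024
    page_kb=2048    # Hugepagesize: 2048 kB (2 MiB pages)
    echo $(( pages * page_kb ))   # 2097152 kB, matching the 'Hugetlb:' line in the snapshot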
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:34.985 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91358704 kB' 'MemUsed: 6256924 kB' 'SwapCached: 0 kB' 'Active: 2984656 kB' 'Inactive: 98692 kB' 'Active(anon): 2652256 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736376 kB' 'Mapped: 113664 kB' 'AnonPages: 350168 kB' 'Shmem: 2305284 kB' 'KernelStack: 13016 kB' 'PageTables: 5928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 372264 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
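The trace above has switched from system-wide to per-NUMA-node accounting: get_nodes() globs /sys/devices/system/node/node+([0-9]) (two nodes on this box, with all 1024 pages expected on node0 and none on node1), and get_meminfo() is re-entered with a node argument, which swaps mem_f to the node's own meminfo file and strips the "Node <N> " prefix from each line before the same key scan runs; that is what the mem=("${mem[@]#Node +([0-9]) }") entry does. A sketch of that per-node path, using a hypothetical node_meminfo_get helper:

    shopt -s extglob                        # the +([0-9]) patterns below need extglob

    # node_meminfo_get KEY NODE: per-node variant of the lookup sketched earlier
    node_meminfo_get() {
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }     # drop the 'Node 0 ' prefix, as common.sh@29 does
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }

    for d in /sys/devices/system/node/node+([0-9]); do   # same glob as hugepages.sh@29
        echo "node${d##*node} HugePages_Surp: $(node_meminfo_get HugePages_Surp "${d##*node}")"
    done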
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.986 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... xtrace of the per-key scan elided: "IFS=': '" / 'read -r var val _' / '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' repeated over node0's counters from MemFree through Unaccepted ...]
00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.987 02:57:06
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:34.987 node0=1024 expecting 1024 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:34.987 00:02:34.987 real 0m3.766s 00:02:34.987 user 0m1.130s 00:02:34.987 sys 0m1.798s 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:34.987 02:57:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:34.987 ************************************ 00:02:34.987 END TEST default_setup 00:02:34.987 ************************************ 00:02:35.247 02:57:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:35.247 02:57:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:35.247 02:57:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:35.247 02:57:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.247 ************************************ 00:02:35.247 START TEST per_node_1G_alloc 00:02:35.247 ************************************ 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:35.247 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:35.248 02:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:37.786 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.786 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:37.786 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
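Note: the trace above shows the sizing step. get_test_nr_hugepages asks for 1048576 kB (1 GiB) per node, and with the platform's 2048 kB default hugepage size that becomes 1048576 / 2048 = 512 pages on each of nodes 0 and 1, exported as NRHUGE=512 HUGENODE=0,1 before scripts/setup.sh runs. A minimal sketch of that arithmetic (an illustration under those assumptions, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Per-node hugepage sizing, as suggested by the xtrace above.
  size_kb=1048576                  # requested per node: 1 GiB, in kB
  hugepage_kb=2048                 # assumed: 2 MiB default hugepage size
  nodes=(0 1)                      # assumed: the two NUMA nodes under test
  nr_hugepages=$((size_kb / hugepage_kb))   # 1048576 / 2048 = 512
  declare -A nodes_test
  for node in "${nodes[@]}"; do
      nodes_test[$node]=$nr_hugepages       # 512 pages requested per node
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${nodes[*]}")"
  # -> NRHUGE=512 HUGENODE=0,1; the verify step later expects 2*512=1024 total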
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.786 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.787 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174942176 kB' 'MemAvailable: 177969568 kB' 'Buffers: 3888 kB' 'Cached: 10740944 kB' 'SwapCached: 0 kB' 'Active: 7727684 kB' 'Inactive: 3489408 kB' 'Active(anon): 7162132 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475028 kB' 'Mapped: 166596 kB' 'Shmem: 6689872 kB' 'KReclaimable: 233296 kB' 'Slab: 794940 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561644 kB' 'KernelStack: 20288 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8505192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314896 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:38.053 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-field scan; every field from MemTotal through HardwareCorrupted fails [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and the loop continues]
00:02:38.054 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:38.054 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.054 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
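Note: anon=0 above is the result of the first get_meminfo call, whose condensed xtrace shows the pattern: map /proc/meminfo (or a per-node meminfo file) into an array, strip any "Node N " prefix, then scan field by field with IFS=': ' until the requested key matches. A self-contained sketch of that pattern, assuming the standard /proc and sysfs meminfo layouts (sed stands in for the extglob prefix-strip the traced script uses):

  #!/usr/bin/env bash
  # get_meminfo FIELD [NODE] -> print the field's value (kB or page count)
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      # per-node counters live under /sys/devices/system/node/nodeN/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # node files prefix every line with "Node N "; drop it before parsing
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0
  }
  get_meminfo AnonHugePages        # prints 0 on the machine traced above
  get_meminfo HugePages_Total 0    # per-node variant, here for node 0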
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18-31 -- # [xtrace condensed: same get_meminfo prologue as above (local node=, var val, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ' read)]
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174942896 kB' 'MemAvailable: 177970288 kB' 'Buffers: 3888 kB' 'Cached: 10740948 kB' 'SwapCached: 0 kB' 'Active: 7721884 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156332 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469352 kB' 'Mapped: 166564 kB' 'Shmem: 6689876 kB' 'KReclaimable: 233296 kB' 'Slab: 794932 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561636 kB' 'KernelStack: 20288 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8499092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314892 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:38.055 02:57:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-field scan; every field from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and the loop continues]
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
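Note: verify_nr_hugepages now has anon=0 and surp=0 in hand and reads HugePages_Rsvd next; these feed the per-node comparison that ends in a "node0=1024 expecting 1024" check like the one at the end of default_setup above. A condensed sketch of that final comparison, reusing the get_meminfo sketch from the previous note and taking the expected count as a plain argument (the traced script also folds per-node surplus into the expectation, which is zero here):

  # verify_hugepages EXPECTED -> succeed iff the configured count matches
  verify_hugepages() {
      local expected=$1 total surp
      total=$(get_meminfo HugePages_Total)   # 1024 in the dumps above
      surp=$(get_meminfo HugePages_Surp)     # surplus pages are not configured pages
      echo "have $((total - surp)) expecting $expected"
      (( total - surp == expected ))
  }
  verify_hugepages 1024 || echo "hugepage setup mismatch" >&2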
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.056 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.057 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.057 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174943308 kB' 'MemAvailable: 177970700 kB' 'Buffers: 3888 kB' 'Cached: 10740964 kB' 'SwapCached: 0 kB' 'Active: 7721256 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155704 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469172 kB' 'Mapped: 166044 kB' 'Shmem: 6689892 kB' 'KReclaimable: 233296 kB' 'Slab: 794888 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561592 kB' 'KernelStack: 20272 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8499112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314892 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
... (the @31 read / @32 compare / @32 continue cycle repeats for every key from MemTotal onward until the match) ...
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:38.058 nr_hugepages=1024
00:02:38.058 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:38.058 resv_hugepages=0
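As a sanity check on the snapshot printed above, the hugepage figures are internally consistent: Hugepagesize is 2048 kB, so the 1024-page pool accounts for 1024 x 2048 kB = 2097152 kB, which is exactly the Hugetlb line, and with HugePages_Rsvd and HugePages_Surp both 0 the entire pool shows up as HugePages_Free: 1024.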
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:38.059 surplus_hugepages=0
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:38.059 anon_hugepages=0
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
... (get_meminfo prologue as above: common.sh@17-@31 locals, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ' read) ...
00:02:38.059 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174943880 kB' 'MemAvailable: 177971272 kB' 'Buffers: 3888 kB' 'Cached: 10741008 kB' 'SwapCached: 0 kB' 'Active: 7720960 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155408 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468780 kB' 'Mapped: 166044 kB' 'Shmem: 6689936 kB' 'KReclaimable: 233296 kB' 'Slab: 794888 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 561592 kB' 'KernelStack: 20256 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8499136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314892 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
... (the @31 read / @32 compare / @32 continue cycle repeats over every key until the match) ...
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
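hugepages.sh@107-@110 assert that the kernel's pool matches what the test configured: the requested page count plus any surplus and reserved pages must equal HugePages_Total. A sketch of the same assertion, reusing the hypothetical get_mem helper from the earlier note:

    nr_hugepages=1024                      # what the test configured
    surp=$(get_mem HugePages_Surp)         # 0 in this run
    resv=$(get_mem HugePages_Rsvd)         # 0 in this run
    total=$(get_mem HugePages_Total)       # 1024 in this run
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage pool mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi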
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:38.060 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:38.061 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.061 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.061 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.061 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.061 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92404864 kB' 'MemUsed: 5210764 kB' 'SwapCached: 0 kB' 'Active: 2982848 kB' 'Inactive: 98692 kB' 'Active(anon): 2650448 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736440 kB' 'Mapped: 112024 kB' 'AnonPages: 348268 kB' 'Shmem: 2305348 kB' 'KernelStack: 12632 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 372468 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
... (the @31 read / @32 compare / @32 continue cycle repeats over the node0 meminfo keys) ...
00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.062 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 82539072 kB' 'MemUsed: 11226464 kB' 'SwapCached: 0 kB' 'Active: 4738832 kB' 'Inactive: 3390716 kB' 'Active(anon): 4505680 kB' 'Inactive(anon): 0 kB' 'Active(file): 233152 kB' 'Inactive(file): 3390716 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8008480 kB' 'Mapped: 54020 kB' 'AnonPages: 121240 kB' 'Shmem: 4384612 kB' 'KernelStack: 7656 kB' 'PageTables: 3060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126732 kB' 'Slab: 422420 kB' 'SReclaimable: 126732 kB' 'SUnreclaim: 295688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
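
The condensed @31/@32/@33 trace above is SPDK's setup/common.sh get_meminfo helper at work: it mapfiles either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix so both formats parse alike, then walks the fields with IFS=': ' until the requested key matches and echoes its value. A minimal standalone sketch of the same technique (simplified; names and structure illustrative, not the verbatim upstream function):

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo field walk traced above (simplified from
    # SPDK's setup/common.sh get_meminfo; illustrative, not verbatim).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # Per-node meminfo exists only on NUMA systems; otherwise fall back
        # to the global file, as the @22-@24 trace shows.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Node files prefix each line with "Node N "; strip it so both
        # sources parse identically (the @29 expansion above).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            # Split "HugePages_Surp:      0" into var=HugePages_Surp val=0.
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on the node1 snapshot above

Walking every field until the key matches is why the raw trace emits one IFS / read / compare / continue group per meminfo line.
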
[xtrace condensed: setup/common.sh@31/@32 repeat IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for every node1 meminfo field (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), until HugePages_Surp matches below]
00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:38.063 node0=512 expecting 512
00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:38.063 node1=512 expecting 512
00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:38.063 real 0m2.926s
00:02:38.063 user 0m1.205s
00:02:38.063 sys 0m1.774s
00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:38.063 02:57:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:38.063 ************************************
00:02:38.063 END TEST per_node_1G_alloc
00:02:38.063 ************************************
00:02:38.063 02:57:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:38.064 02:57:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:38.064 02:57:09
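
The @115-@130 lines just traced are hugepages.sh's per-node verification: it folds reserved and surplus pages into nodes_test[], collapses the per-node totals into the sorted_t set, and prints "nodeN=<got> expecting <want>" before the final [[ 512 == 512 ]] check. A condensed sketch of that bookkeeping (illustrative, not the verbatim script; resv and the expected split taken from this run, get_meminfo as sketched earlier):

    # Condensed sketch of the per-node check traced at hugepages.sh@115-@130.
    expected=512
    resv=0                            # reserved pages to fold in (0 here)
    nodes_test=([0]=512 [1]=512)      # per-node targets from the test setup
    sorted_t=()

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes above
        (( nodes_test[node] += surp ))
    done

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1                 # distinct totals -> keys
        echo "node$node=${nodes_test[node]} expecting $expected"
    done
    [[ ${nodes_test[0]} == "$expected" ]]            # the [[ 512 == 512 ]] above

Keying sorted_t by the observed count means any node that drifted from the target shows up as an extra key, which is what the test ultimately asserts against.
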
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:38.064 02:57:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:38.064 ************************************ 00:02:38.064 START TEST even_2G_alloc 00:02:38.064 ************************************ 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:38.064 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.323 02:57:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.229 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:40.229 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:02:40.229 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:40.229 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:40.499 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174987452 kB' 'MemAvailable: 178014844 kB' 'Buffers: 3888 kB' 'Cached: 10741092 kB' 'SwapCached: 0 kB' 'Active: 7721432 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155880 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469660 kB' 'Mapped: 166144 kB' 'Shmem: 6690020 kB' 'KReclaimable: 233296 kB' 'Slab: 794196 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560900 kB' 'KernelStack: 20480 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314908 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31/@32 repeat IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for each /proc/meminfo field from MemTotal through HardwareCorrupted, until AnonHugePages matches below]
00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
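
The @96/@97 steps above gate anonymous-hugepage accounting on transparent hugepages: only while THP is not pinned to [never] does the verifier read AnonHugePages (0 kB in this run) into anon, since THP-backed anonymous memory would otherwise skew the free-page math. A simplified sketch of that probe (illustrative; paths and helper as traced above):

    # Sketch of the anon-hugepage probe traced at hugepages.sh@96-@97
    # (simplified, not verbatim).
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB; 0 in the run above
    fi
    echo "anon=$anon"
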
00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174986484 kB' 'MemAvailable: 178013876 kB' 'Buffers: 3888 kB' 'Cached: 10741096 kB' 'SwapCached: 0 kB' 'Active: 7721512 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155960 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469400 kB' 'Mapped: 166144 kB' 'Shmem: 6690024 kB' 'KReclaimable: 233296 kB' 'Slab: 794124 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560828 kB' 'KernelStack: 20512 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8502124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
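00:02:40.502 The xtrace above comes from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo with mapfile, then walks each 'Key: value' line until it reaches the requested key (HugePages_Surp here), emitting one IFS/read/[[ ... ]]/continue record per non-matching key. The following is a minimal sketch reconstructed from the @16-@33 line references in this trace; the while-read loop shape and the node-path fallback are assumptions, so treat it as a readability aid, not SPDK's verbatim source.

    # Reconstruction from the trace above; not SPDK's verbatim source.
    shopt -s extglob   # the +([0-9]) pattern at @29 needs extended globbing

    get_meminfo() {
        local get=$1          # key to look up, e.g. HugePages_Surp (@17)
        local node=${2:-}     # optional NUMA node; empty in this run (@18)
        local var val         # current "Key: value" pair (@19)
        local mem_f mem       # meminfo path and its lines (@20)

        mem_f=/proc/meminfo                                         # @22
        # With node unset, @23 probes the literal path
        # /sys/devices/system/node/node/meminfo (which does not exist),
        # and @25 sees [[ -n '' ]], so the system-wide file is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"                                   # @28
        mem=("${mem[@]#Node +([0-9]) }")  # strip per-node "Node N " prefix (@29)

        while IFS=': ' read -r var val _; do                        # @31
            [[ $var == "$get" ]] || continue   # one trace record per miss (@32)
            echo "$val"                                             # @33
            return 0
        done < <(printf '%s\n' "${mem[@]}")                         # @16
        return 1
    }

00:02:40.502 Each scan therefore walks roughly fifty meminfo keys before the match; for HugePages_Surp it finally echoes 0, which hugepages.sh@99 captures as surp=0 below.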
00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.502 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.503 
02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174984332 kB' 'MemAvailable: 178011724 kB' 'Buffers: 3888 kB' 'Cached: 10741112 kB' 'SwapCached: 0 kB' 'Active: 7721460 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155908 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469360 kB' 'Mapped: 166136 kB' 'Shmem: 6690040 kB' 'KReclaimable: 233296 kB' 'Slab: 794124 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560828 kB' 'KernelStack: 20656 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8503484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314908 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.503 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.504 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 
02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:40.505 nr_hugepages=1024 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.505 resv_hugepages=0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.505 surplus_hugepages=0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.505 anon_hugepages=0 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174989652 kB' 'MemAvailable: 178017044 kB' 'Buffers: 3888 kB' 'Cached: 10741136 kB' 'SwapCached: 0 kB' 'Active: 7721100 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155548 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468912 kB' 'Mapped: 166052 kB' 'Shmem: 6690064 kB' 'KReclaimable: 233296 kB' 'Slab: 794060 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560764 kB' 'KernelStack: 20368 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8502168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314892 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.505 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.506 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" xtrace repeats for every remaining /proc/meminfo field, KReclaimable through Unaccepted ...]
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92407444 kB' 'MemUsed: 5208184 kB' 'SwapCached: 0 kB' 'Active: 2982324 kB' 'Inactive: 98692 kB' 'Active(anon): 2649924 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736440 kB' 'Mapped: 112032 kB' 'AnonPages: 347784 kB' 'Shmem: 2305348 kB' 'KernelStack: 12824 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 371944 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
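For readability, here is a minimal bash reconstruction of what setup/common.sh's get_meminfo appears to do, inferred purely from the traced commands above (the real SPDK helper may differ in loop structure): pick the per-node meminfo file when a node argument is given, strip the "Node N " prefix, then scan field by field until the requested key matches.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; not the verbatim SPDK helper.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val rest line
        local mem_f=/proc/meminfo
        # A per-node query reads that node's meminfo instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (extglob)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] || continue   # the long field scan seen in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # system-wide: 1024 in this run
    get_meminfo HugePages_Surp 0    # node0 surplus: 0 in this run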
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.507 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same scan elided: every node0 meminfo field from MemFree through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continues ...]
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.508 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 82582656 kB' 'MemUsed: 11182880 kB' 'SwapCached: 0 kB' 'Active: 4739052 kB' 'Inactive: 3390716 kB' 'Active(anon): 4505900 kB' 'Inactive(anon): 0 kB' 'Active(file): 233152 kB' 'Inactive(file): 3390716 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8008620 kB' 'Mapped: 54020 kB' 'AnonPages: 121320 kB' 'Shmem: 4384752 kB' 'KernelStack: 7592 kB' 'PageTables: 2804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126732 kB' 'Slab: 422116 kB' 'SReclaimable: 126732 kB' 'SUnreclaim: 295384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
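The bookkeeping around setup/hugepages.sh@115-117 above is easy to miss in the trace: each node's expected count is bumped by the reserved-page count and by that node's surplus before the final comparison. A hedged sketch of that loop, reusing get_meminfo from the sketch above (resv and both per-node surpluses are 0 in this run):

    # Per-node accounting as traced at setup/hugepages.sh@115-117 (sketch).
    nodes_test=([0]=512 [1]=512)   # pages each node is expected to hold
    resv=0                         # HugePages_Rsvd, 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for node0 and node1 here
        (( nodes_test[node] += surp ))
    done
    echo "node0=${nodes_test[0]} expecting 512"   # matches the log output below
    echo "node1=${nodes_test[1]} expecting 512"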
00:02:40.509 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.509 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same scan elided for node1: every field from MemFree through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continues ...]
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:40.510 node0=512 expecting 512
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:40.510 node1=512 expecting 512
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:40.510
00:02:40.510 real	0m2.418s
00:02:40.510 user	0m0.894s
00:02:40.510 sys	0m1.495s
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:40.510 02:57:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:40.510 ************************************
00:02:40.510 END TEST even_2G_alloc
00:02:40.510 ************************************
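The even_2G_alloc result just closed above is the plain even split: 1024 2048 kB pages over two NUMA nodes is 512 per node, which is exactly what both "expecting" lines report. A quick arithmetic sanity check (plain shell, not SPDK code):

    # Even split behind "node0=512 expecting 512" / "node1=512 expecting 512".
    nr_hugepages=1024 no_nodes=2
    echo $(( nr_hugepages / no_nodes ))   # 512 pages per node
    echo $(( 1024 * 2048 ))               # 2097152 kB = 2 GiB backed in total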
00:02:40.770 02:57:11 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:40.770 02:57:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:40.770 02:57:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:40.770 02:57:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:40.770 ************************************
00:02:40.770 START TEST odd_alloc
00:02:40.770 ************************************
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:40.770 02:57:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:43.310 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.310 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:43.310 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175002440 kB' 'MemAvailable: 178029832 kB' 'Buffers: 3888 kB' 'Cached: 10741248 kB' 'SwapCached: 0 kB' 'Active: 7721992 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156440 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469168 kB' 'Mapped: 166164 kB' 'Shmem: 6690176 kB' 'KReclaimable: 233296 kB' 'Slab: 794168 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560872 kB' 'KernelStack: 20320 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8500192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
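The hugepages.sh@81-84 loop traced above is what turns nr_hugepages=1025 into the per-node targets 513/512, and the dump agrees: 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB' (1025 x 2048 kB = 2,099,200 kB). A hedged reconstruction of that loop from the traced values; the real hugepages.sh may word it differently:

    # Odd split: hand each node floor(remaining / nodes_left), last node first.
    _nr_hugepages=1025 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # 513 left, then 0
        : $(( _no_nodes-- ))                                  # 1 node left, then 0
    done
    echo "${nodes_test[@]}"   # 513 512 -> node0 takes the odd page

The intermediate values match the trace exactly: nodes_test[1]=512 with ": 513" and ": 1", then nodes_test[0]=513 with ": 0" and ": 0".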
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.310 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.310 02:57:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.311 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:43.311 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.311 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (identical compare/continue/read trace repeats for every remaining /proc/meminfo key from Inactive(anon) through HardwareCorrupted)
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
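For readability, here is a minimal sketch of the lookup routine this xtrace corresponds to, reconstructed from the trace itself rather than copied from setup/common.sh; the names (get, node, mem_f) follow the trace, and the per-node branch is inferred from the "[[ -e /sys/devices/system/node/node/meminfo ]]" test that appears above when $node is empty:

    #!/usr/bin/env bash
    # Assumed reconstruction (not the SPDK source): print the value of one
    # /proc/meminfo key, preferring a per-NUMA-node meminfo file if given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # An empty $node makes this path ".../node/node/meminfo", which fails
        # the -e test, so the global /proc/meminfo is kept, as the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node N "; the traced script
        # strips that at common.sh@29, omitted here for brevity.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the long compare/continue run above
            echo "$val"                       # e.g. "0" for AnonHugePages here
            return 0
        done < "$mem_f"
        return 1
    }
    # e.g.: anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 in this run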
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:43.576 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175002288 kB' 'MemAvailable: 178029680 kB' 'Buffers: 3888 kB' 'Cached: 10741252 kB' 'SwapCached: 0 kB' 'Active: 7721728 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156176 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468888 kB' 'Mapped: 166140 kB' 'Shmem: 6690180 kB' 'KReclaimable: 233296 kB' 'Slab: 794136 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560840 kB' 'KernelStack: 20288 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8500208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
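The hugepage counters in this snapshot are internally consistent: with Hugepagesize at 2048 kB, the 1025 pages reported by HugePages_Total account for 1025 * 2048 kB = 2099200 kB, exactly the Hugetlb figure, and HugePages_Free: 1025 alongside HugePages_Rsvd: 0 and HugePages_Surp: 0 means the odd-sized pool is fully allocated and entirely unused at this point in the test.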
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.577 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:43.578 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (identical compare/continue/read trace repeats for every key from MemFree through HugePages_Rsvd)
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.579 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.580 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175002748 kB' 'MemAvailable: 178030140 kB' 'Buffers: 3888 kB' 'Cached: 10741268 kB' 'SwapCached: 0 kB' 'Active: 7721284 kB' 'Inactive: 3489408 kB' 'Active(anon): 7155732 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468888 kB' 'Mapped: 166064 kB' 'Shmem: 6690196 kB' 'KReclaimable: 233296 kB' 'Slab: 794064 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560768 kB' 'KernelStack: 20304 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8500232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:43.580 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:43.580 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:43.581 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (identical compare/continue/read trace repeats for every key from MemFree through HugePages_Free)
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:43.582 nr_hugepages=1025
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:43.582 resv_hugepages=0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:43.582 surplus_hugepages=0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:43.582 anon_hugepages=0
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
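The hugepages.sh@107-109 arithmetic above is the pass/fail gate for the odd_alloc case. A standalone sketch of the same consistency check, under the assumption (inferred from the trace, not copied from the SPDK source) that the requested page count is compared against the kernel's pool together with its surplus and reserved counters:

    #!/usr/bin/env bash
    # Assumed reconstruction of the check traced as
    # "(( 1025 == nr_hugepages + surp + resv ))": with surp=0 and resv=0 the
    # kernel pool must be exactly the odd page count the test requested.
    verify_hugepages_sketch() {
        local expected=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        (( expected == total + surp + resv ))
    }
    verify_hugepages_sketch 1025 && echo "hugepage pool matches the requested odd size"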
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175003252 kB' 'MemAvailable: 178030644 kB' 'Buffers: 3888 kB' 'Cached: 10741268 kB' 'SwapCached: 0 kB' 'Active: 7721604 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156052 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469296 kB' 'Mapped: 166064 kB' 'Shmem: 6690196 kB' 'KReclaimable: 233296 kB' 'Slab: 794064 kB' 'SReclaimable: 233296 kB' 'SUnreclaim: 560768 kB' 'KernelStack: 20368 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8502388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:43.582 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (identical compare/continue/read trace repeats for every key from MemFree through SecPageTables)
00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32
-- # continue 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.583 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.584 02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92415288 kB' 'MemUsed: 5200340 kB' 'SwapCached: 0 kB' 'Active: 2982148 kB' 'Inactive: 98692 kB' 'Active(anon): 2649748 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736444 kB' 'Mapped: 112044 kB' 'AnonPages: 347580 kB' 'Shmem: 2305352 kB' 'KernelStack: 12632 kB' 'PageTables: 5064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 372264 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.584 
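The common.sh@17-@33 trace above is one round of the meminfo lookup this whole section repeats. A minimal sketch of that pattern, under the assumption of a hypothetical helper name get_meminfo_value (SPDK's traced helper is setup/common.sh's get_meminfo; the layout here is illustrative, not the exact source):

    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo_value() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # A node argument switches to the per-node sysfs view, as in the
        # @23/@24 checks traced above.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # sysfs lines carry a "Node N " prefix; strip it so both sources
        # parse as plain "Key: value" pairs.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the dump above, get_meminfo_value HugePages_Total would print 1025, the value the @110 check (( 1025 == nr_hugepages + surp + resv )) consumes; the per-node calls that follow fetch HugePages_Surp from node0 and node1 the same way.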
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92415288 kB' 'MemUsed: 5200340 kB' 'SwapCached: 0 kB' 'Active: 2982148 kB' 'Inactive: 98692 kB' 'Active(anon): 2649748 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736444 kB' 'Mapped: 112044 kB' 'AnonPages: 347580 kB' 'Shmem: 2305352 kB' 'KernelStack: 12632 kB' 'PageTables: 5064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 372264 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 scans each node0 meminfo key against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, continuing past every non-match from MemTotal through HugePages_Free]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 82588768 kB' 'MemUsed: 11176768 kB' 'SwapCached: 0 kB' 'Active: 4739212 kB' 'Inactive: 3390716 kB' 'Active(anon): 4506060 kB' 'Inactive(anon): 0 kB' 'Active(file): 233152 kB' 'Inactive(file): 3390716 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8008772 kB' 'Mapped: 54024 kB' 'AnonPages: 121296 kB' 'Shmem: 4384904 kB' 'KernelStack: 7672 kB' 'PageTables: 2952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126732 kB' 'Slab: 421800 kB' 'SReclaimable: 126732 kB' 'SUnreclaim: 295068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: the same HugePages_Surp key scan runs over the node1 values]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
02:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
02:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
real 0m2.922s
user 0m1.186s
sys 0m1.799s
02:57:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
02:57:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST odd_alloc
************************************
02:57:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
02:57:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
02:57:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
02:57:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST custom_alloc
************************************
02:57:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
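get_test_nr_hugepages converts the requested size into a page count, and the get_test_nr_hugepages_per_node call just entered spreads that count over the NUMA nodes. A hedged sketch of the arithmetic with illustrative variable names (the 2048 kB page size is taken from the Hugepagesize lines in the dumps above):

    default_hugepages=2048                        # kB per hugepage
    size=1048576                                  # kB, custom_alloc's first request
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512, matching @57
    # With no user-supplied node list, split evenly over the nodes, which
    # is what the traced @81-@84 loop arrives at: 256 pages on each node.
    no_nodes=2
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    echo "${nodes_test[@]}"                       # -> 256 256

The test then pins explicit per-node targets instead: nodes_hp[0]=512 from this round, and nodes_hp[1]=1024 from a second round sized 2097152 kB (1024 pages); together they form the HUGENODE string built at @181-@187 below.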
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.589 02:57:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.125 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:46.125 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.388 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:46.388 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:46.388 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:46.388 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:46.388 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:46.389 0000:80:04.2 (8086 2021): 
00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:43.588 02:57:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:46.125 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:46.125 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:46.388 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:46.388 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:46.388 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:46.388 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:46.388 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:46.389 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
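Here the trace enters common.sh's get_meminfo helper: it reads /proc/meminfo (or a node-local meminfo file when a NUMA node is given), strips any "Node N" prefix, splits each line on ': ', and echoes the value of the first key that matches. A sketch of that parser as the trace suggests it works (the extglob strip, the heredoc-style read, and the return codes are inferred, not copied from SPDK's common.sh):

    #!/usr/bin/env bash
    # get_meminfo as implied by the common.sh trace above; details are inferred.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up, plus an optional NUMA node
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the node-local view when it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix lines with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
            echo "$val"                        # e.g. 0 for AnonHugePages here
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total   # prints 1536 on the box traced here

The scan this drives is exactly the long run of [[ key == ... ]] / continue steps that fills the rest of this log: one comparison per /proc/meminfo line until the requested key matches and its value is echoed back to verify_nr_hugepages.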
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173978116 kB' 'MemAvailable: 177005468 kB' 'Buffers: 3888 kB' 'Cached: 10741404 kB' 'SwapCached: 0 kB' 'Active: 7722312 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156760 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469200 kB' 'Mapped: 166176 kB' 'Shmem: 6690332 kB' 'KReclaimable: 233216 kB' 'Slab: 793912 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560696 kB' 'KernelStack: 20176 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8500740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.389 02:57:17
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.389 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.390 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173978408 kB' 'MemAvailable: 177005760 kB' 'Buffers: 3888 kB' 'Cached: 10741408 kB' 'SwapCached: 0 kB' 'Active: 7722008 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156456 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469364 kB' 'Mapped: 166076 kB' 'Shmem: 6690336 kB' 'KReclaimable: 233216 kB' 'Slab: 793856 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560640 kB' 'KernelStack: 20160 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8500756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.391 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.392 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
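At this point verify_nr_hugepages has gathered surp=0 (HugePages_Surp) and moves on to HugePages_Rsvd, having already fetched AnonHugePages earlier; each value comes from another full get_meminfo scan. A hedged sketch of the bookkeeping this implies, reusing the get_meminfo sketch above (the final comparisons are assumptions about what "verify" checks, not the canonical hugepages.sh):

    #!/usr/bin/env bash
    # Bookkeeping implied by the verify_nr_hugepages trace; final checks are assumptions.
    verify_nr_hugepages() {
        local node surp resv anon
        # AnonHugePages is only meaningful when THP is not fully disabled,
        # hence the "[never]" test seen at hugepages.sh@96 in the trace
        [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] &&
            anon=$(get_meminfo AnonHugePages)   # 0 in the trace above
        surp=$(get_meminfo HugePages_Surp)      # surplus pages beyond the pool: 0
        resv=$(get_meminfo HugePages_Rsvd)      # reserved, not yet faulted-in: 0
        # Assumed intent: a clean pool has no surplus/reserved skew and the
        # total matches what setup.sh was asked for (512 + 1024 = 1536 here)
        ((surp == 0 && resv == 0)) || return 1
        (($(get_meminfo HugePages_Total) == nr_hugepages))
    }

With HugePages_Total and HugePages_Free both reporting 1536 in the meminfo dumps, the per-node custom allocation requested via HUGENODE is exactly what the kernel ended up providing.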
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173976644 kB' 'MemAvailable: 177003996 kB' 'Buffers: 3888 kB' 'Cached: 10741424 kB' 'SwapCached: 0 kB' 'Active: 7724392 kB' 'Inactive: 3489408 kB' 'Active(anon): 7158840 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471724 kB' 'Mapped: 166580 kB' 'Shmem: 6690352 kB' 'KReclaimable: 233216 kB' 'Slab: 793856 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560640 kB' 'KernelStack: 20128 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8503984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314668 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.393
02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.393 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.660 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for every remaining meminfo field, Zswapped through HugePages_Free]
00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:46.661 nr_hugepages=1536 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.661 resv_hugepages=0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.661 surplus_hugepages=0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.661 anon_hugepages=0 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.661 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.662 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.662 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173974376 kB' 'MemAvailable: 177001728 kB' 'Buffers: 3888 kB' 'Cached: 10741444 kB' 'SwapCached: 0 kB' 'Active: 7722084 kB' 'Inactive: 3489408 kB' 'Active(anon): 7156532 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469452 kB' 'Mapped: 166508 kB' 'Shmem: 6690372 kB' 'KReclaimable: 233216 kB' 'Slab: 793856 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560640 kB' 'KernelStack: 20160 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8500800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:46.662 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.662 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the scan cycle repeats for every meminfo field, MemFree through Unaccepted, until the requested key matches]
00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
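The passes traced above are all the same operation: get_meminfo prints a meminfo listing and walks it field by field until the requested key (HugePages_Rsvd, then HugePages_Total) matches, then echoes that field's value. A minimal sketch of that lookup, assuming bash with extglob; the name lookup_meminfo and the read-from-file form are illustrative stand-ins, not the literal setup/common.sh source:

shopt -s extglob   # for the +([0-9]) pattern below

lookup_meminfo() {   # hypothetical stand-in for the traced get_meminfo
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # With a node index, read that node's counters from sysfs instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip rows until the key matches
        echo "$val"                        # value only; the kB column is dropped
        return 0
    done < "$mem_f"
    return 1
}

lookup_meminfo HugePages_Total    # prints 1536 on this machine
lookup_meminfo HugePages_Surp 0   # prints 0, as in the node 0 pass below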
node in "${!nodes_test[@]}" 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92432752 kB' 'MemUsed: 5182876 kB' 'SwapCached: 0 kB' 'Active: 2983608 kB' 'Inactive: 98692 kB' 'Active(anon): 2651208 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736480 kB' 'Mapped: 112056 kB' 'AnonPages: 349004 kB' 'Shmem: 2305388 kB' 'KernelStack: 12536 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 372036 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.664 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.665 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- 
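Around that lookup sits the per-node bookkeeping: get_nodes records each node's actual hugepage count, and the @115-@117 loop grows each node's expected count by the reserved and surplus pages just read. A hedged sketch of that accounting, reusing the lookup_meminfo sketch above (array names follow the trace; the exact sysfs leaf get_nodes reads is an assumption based on this run's 2048 kB Hugepagesize):

shopt -s extglob
nodes_sys=()
nodes_test=([0]=512 [1]=1024)   # requested custom split for this test
for node in /sys/devices/system/node/node+([0-9]); do
    # actual per-node allocation; assumed sysfs leaf, reads back 512 and 1024 here
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

resv=0   # HugePages_Rsvd from the earlier system-wide pass
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                  # reserved pages, 0 in this run
    surp=$(lookup_meminfo HugePages_Surp "$node")   # per-node surplus, 0 in this run
    (( nodes_test[node] += surp ))
done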
00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 81541624 kB' 'MemUsed: 12223912 kB' 'SwapCached: 0 kB' 'Active: 4738644 kB' 'Inactive: 3390716 kB' 'Active(anon): 4505492 kB' 'Inactive(anon): 0 kB' 'Active(file): 233152 kB' 'Inactive(file): 3390716 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8008872 kB' 'Mapped: 54020 kB' 'AnonPages: 120632 kB' 'Shmem: 4385004 kB' 'KernelStack: 7608 kB' 'PageTables: 2908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126652 kB' 'Slab: 421820 kB' 'SReclaimable: 126652 kB' 'SUnreclaim: 295168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.666 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the scan cycle repeats for every node1 meminfo field, MemFree through HugePages_Free]
00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:46.667 node0=512
expecting 512 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:46.667 node1=1024 expecting 1024 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:46.667 00:02:46.667 real 0m2.951s 00:02:46.667 user 0m1.172s 00:02:46.667 sys 0m1.831s 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:46.667 02:57:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:46.667 ************************************ 00:02:46.668 END TEST custom_alloc 00:02:46.668 ************************************ 00:02:46.668 02:57:17 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.668 ************************************ 00:02:46.668 START TEST no_shrink_alloc 00:02:46.668 ************************************ 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.668 02:57:17 
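The @126-@130 lines above are hugepages.sh walking its nodes_test bookkeeping, printing the actual vs. expected hugepage count for each NUMA node, and then making one joined comparison. A minimal bash sketch of that pattern follows; the sysfs path and the join_commas helper are assumptions for illustration (the trace only shows the echoed results), not the test's actual implementation.

  #!/usr/bin/env bash
  # Sketch: report and verify per-node 2 MB hugepage counts the way the
  # "node0=512 expecting 512" / "node1=1024 expecting 1024" lines do.
  declare -A expected=([0]=512 [1]=1024)      # mirrors nodes_test in the trace

  join_commas() { local IFS=,; echo "$*"; }   # hypothetical helper

  actual=()
  want=()
  for node in 0 1; do
      # Standard kernel sysfs location for per-node hugepage counts
      # (assumed here; not shown in the trace itself).
      nr=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$node=$nr expecting ${expected[$node]}"
      actual+=("$nr")
      want+=("${expected[$node]}")
  done

  # Same shape as the trace's final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] check:
  # join both lists with commas and compare once.
  [[ $(join_commas "${actual[@]}") == $(join_commas "${want[@]}") ]] && echo OK

On this run both nodes match (512 and 1024), so custom_alloc passes and the harness moves on to no_shrink_alloc below.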
00:02:46.668 02:57:17 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:46.668 02:57:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:46.668 ************************************
00:02:46.668 START TEST no_shrink_alloc
00:02:46.668 ************************************
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
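The get_test_nr_hugepages trace above reduces to simple arithmetic: a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size (matching 'Hugepagesize: 2048 kB' in the meminfo snapshots below) gives nr_hugepages=1024, all of it assigned to the caller-named node 0. A sketch reconstructed from the traced lines; details not visible in the trace are hedged in comments.

  #!/usr/bin/env bash
  # Sketch of the allocation math traced at setup/hugepages.sh@49-@73.
  declare -a nodes_test=()
  default_hugepages=2048   # kB; assumed constant, consistent with the snapshots

  get_test_nr_hugepages() {
      local size=$1; shift              # size in kB, e.g. 2097152 (2 GiB)
      local node_ids=("$@")             # e.g. (0)
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      # As in the trace, the _no_nodes loop variable doubles as the node
      # index when the caller pins specific nodes.
      local _no_nodes
      for _no_nodes in "${node_ids[@]}"; do
          nodes_test[_no_nodes]=$nr_hugepages
      done
  }

  get_test_nr_hugepages 2097152 0
  echo "node0 gets ${nodes_test[0]} hugepages"   # -> 1024

With the target set, the test runs scripts/setup.sh and then verifies the resulting state, which is what the rest of this section traces.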
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:46.668 02:57:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:49.265 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:49.265 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:49.265 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:49.530 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:49.530 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:49.530 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:49.530 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:49.530 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.531 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174984328 kB' 'MemAvailable: 178011680 kB' 'Buffers: 3888 kB' 'Cached: 10741548 kB' 'SwapCached: 0 kB' 'Active: 7725636 kB' 'Inactive: 3489408 kB' 'Active(anon): 7160084 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472580 kB' 'Mapped: 166188 kB' 'Shmem: 6690476 kB' 'KReclaimable: 233216 kB' 'Slab: 793424 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560208 kB' 'KernelStack: 20176 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:49.532 [ ... per-key xtrace scan of that snapshot for AnonHugePages: every key from MemTotal through HardwareCorrupted fails the match at setup/common.sh@32 and hits continue ... ]
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.532 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174984148 kB' 'MemAvailable: 178011500 kB' 'Buffers: 3888 kB' 'Cached: 10741552 kB' 'SwapCached: 0 kB' 'Active: 7725244 kB' 'Inactive: 3489408 kB' 'Active(anon): 7159692 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472700 kB' 'Mapped: 166104 kB' 'Shmem: 6690480 kB' 'KReclaimable: 233216 kB' 'Slab: 793400 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560184 kB' 'KernelStack: 20176 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314684 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:49.533 [ ... per-key scan of that snapshot for HugePages_Surp: every key from MemTotal through HugePages_Rsvd fails the match at setup/common.sh@32 and hits continue ... ]
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174983644 kB' 'MemAvailable: 178010996 kB' 'Buffers: 3888 kB' 'Cached: 10741572 kB' 'SwapCached: 0 kB' 'Active: 7725412 kB' 'Inactive: 3489408 kB' 'Active(anon): 7159860 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472856 kB' 'Mapped: 166104 kB' 'Shmem: 6690500 kB' 'KReclaimable: 233216 kB' 'Slab: 793400 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560184 kB' 'KernelStack: 20224 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
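The bulk of this section is xtrace from setup/common.sh's get_meminfo helper, which snapshots a meminfo file once and then scans it key by key; every condensed "continue" run above is that scan rejecting non-matching keys. The sketch below is reconstructed from the traced lines (@17-@33); details the trace does not show, such as the exact fallback when a node argument is given, are hedged in comments.

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern used below

  # Sketch of get_meminfo as traced at setup/common.sh@17-@33.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # With a node argument, prefer that node's own meminfo file; with
      # node empty (as in this trace) the probe tests the literal path
      # "node/meminfo", fails, and the global file is used instead.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node N "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # one 'continue' per rejected key
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")    # the long printf snapshot lines
      return 1
  }

  get_meminfo HugePages_Surp   # -> 0 on this machine, hence surp=0 above

The snapshot is taken once per query, which is why the full meminfo contents reappear verbatim before each scan in the trace.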
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.534 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 read/compare/continue xtrace repeats for every remaining snapshot key (MemFree through HugePages_Free) until the requested key is reached ...]
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
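Stripped of the xtrace noise, the scan that just returned is a linear key lookup: split each snapshot line on ': ', compare the key against the requested field, and echo the value on the first hit. A compact re-implementation that mirrors the traced control flow (a sketch, not SPDK's exact helper):

get_meminfo() { # usage: get_meminfo <field> [numa-node]
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's own meminfo file instead (common.sh@23-24).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val" # value only, unit dropped: 0 for HugePages_Rsvd here
            return 0
        fi
    done
    return 1
}

So get_meminfo HugePages_Rsvd prints 0 on this machine, and get_meminfo HugePages_Surp 0 further down answers the same question for node0 alone.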
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.536 nr_hugepages=1024
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.536 resv_hugepages=0
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.536 surplus_hugepages=0
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.536 anon_hugepages=0
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174982832 kB' 'MemAvailable: 178010184 kB' 'Buffers: 3888 kB' 'Cached: 10741592 kB' 'SwapCached: 0 kB' 'Active: 7725056 kB' 'Inactive: 3489408 kB' 'Active(anon): 7159504 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472468 kB' 'Mapped: 166104 kB' 'Shmem: 6690520 kB' 'KReclaimable: 233216 kB' 'Slab: 793400 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 560184 kB' 'KernelStack: 20192 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8505736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
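The arithmetic checks at hugepages.sh@107 and @109 above are the invariant this test case cares about: the kernel's HugePages_Total must equal the configured count once surplus and reserved pages are added in. As a standalone restatement with this run's values (get_meminfo as sketched earlier):

nr_hugepages=1024                        # what the test configured
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1    # both hold trivially here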
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.536 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical read/compare/continue xtrace repeats for each remaining snapshot key until HugePages_Total is reached ...]
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.538 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91402392 kB' 'MemUsed: 6213236 kB' 'SwapCached: 0 kB' 'Active: 2986112 kB' 'Inactive: 98692 kB' 'Active(anon): 2653712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736508 kB' 'Mapped: 112080 kB' 'AnonPages: 351584 kB' 'Shmem: 2305416 kB' 'KernelStack: 12536 kB' 'PageTables: 5012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 371692 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 265128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
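The per-node pass above differs from the global one in two places: get_nodes derives node indices from the /sys directory names, and common.sh@24 swaps /proc/meminfo for node0's own meminfo file (whose "Node 0 " line prefixes are stripped at @29). The xtrace only shows the already-expanded per-node totals (1024 and 0), so reading them via get_meminfo in this sketch is an assumption about the elided right-hand side:

shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} reduces ".../node0" to the bare index "0"
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}   # 2 on this box: node0 holds all 1024 pages, node1 none
(( no_nodes > 0 ))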
[... the node0 snapshot is scanned the same way, key by key (MemTotal through HugePages_Free), until HugePages_Surp matches; xtrace timestamps advance from 00:02:49.538 to 00:02:49.799 ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:49.799 node0=1024 expecting 1024 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.799 02:57:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.339 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.339 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.339 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.339 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.339 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.340 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:52.340 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.340 02:57:23 
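Editor's note: the hugepages.sh@126-@130 steps above are what print "node0=1024 expecting 1024": the harness tallies per-node hugepage counts and compares them against the expected total before moving on. Below is a minimal standalone sketch of that kind of per-node check, written for this note rather than taken from the SPDK scripts; the sysfs path is the standard kernel interface for 2 MiB pages, everything else is illustrative.

    #!/usr/bin/env bash
    # Compare each NUMA node's allocated 2 MiB hugepages against an
    # expected count, echoing in the same "node0=1024 expecting 1024"
    # shape seen in the log above.
    expected=${1:-1024}
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*/node}
        actual=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node=$actual expecting $expected"
        (( actual == expected )) || exit 1
    done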
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.340 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174997308 kB' 'MemAvailable: 178024660 kB' 'Buffers: 3888 kB' 'Cached: 10741684 kB' 'SwapCached: 0 kB' 'Active: 7724260 kB' 'Inactive: 3489408 kB' 'Active(anon): 7158708 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470932 kB' 'Mapped: 166212 kB' 'Shmem: 6690612 kB' 'KReclaimable: 233216 kB' 'Slab: 793080 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 559864 kB' 'KernelStack: 20176 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314636 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: get_meminfo walks every /proc/meminfo field from MemTotal through HardwareCorrupted looking for AnonHugePages; each non-matching field hits the "continue" at setup/common.sh@32]
00:02:52.341 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:52.341 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.341 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:52.341 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:52.341 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: same get_meminfo preamble as above (setup/common.sh@17-@31), this time with get=HugePages_Surp]
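Editor's note: each get_meminfo call in this trace replays the same pattern: snapshot the meminfo source, then split every line on IFS=': ' and compare the field name against the request until it matches. Below is a minimal sketch of that loop as reconstructed from this xtrace alone; the real helper in setup/common.sh also mapfiles the input and strips "Node N" prefixes so it can parse per-node meminfo files, which this sketch leaves out.

    get_meminfo() {
        local get=$1 var val _
        # /proc/meminfo lines look like "AnonHugePages:         0 kB";
        # splitting on ": " yields the field name and its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # e.g. HugePages_Surp
            echo "$val"                        # value only, unit dropped
            return 0
        done < /proc/meminfo
        return 1
    }

Called as anon=$(get_meminfo AnonHugePages), which matches the shape of the hugepages.sh@97 assignment recorded above.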
# mem=("${mem[@]#Node +([0-9]) }") 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174997748 kB' 'MemAvailable: 178025100 kB' 'Buffers: 3888 kB' 'Cached: 10741688 kB' 'SwapCached: 0 kB' 'Active: 7723060 kB' 'Inactive: 3489408 kB' 'Active(anon): 7157508 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470220 kB' 'Mapped: 166112 kB' 'Shmem: 6690616 kB' 'KReclaimable: 233216 kB' 'Slab: 793012 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 559796 kB' 'KernelStack: 20160 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314604 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.342 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.343 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.344 02:57:23 
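Editor's note: each of these get_meminfo round trips resolves a single field. Outside the harness the same value can be read in one pass rather than a traced loop; for instance, assuming a stock /proc/meminfo, the following is an equivalent one-liner, not the script's own method:

    awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo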
00:02:52.344 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174997748 kB' 'MemAvailable: 178025100 kB' 'Buffers: 3888 kB' 'Cached: 10741688 kB' 'SwapCached: 0 kB' 'Active: 7723096 kB' 'Inactive: 3489408 kB' 'Active(anon): 7157544 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470256 kB' 'Mapped: 166112 kB' 'Shmem: 6690616 kB' 'KReclaimable: 233216 kB' 'Slab: 793012 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 559796 kB' 'KernelStack: 20176 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314620 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: get_meminfo again walks /proc/meminfo from MemTotal, this time looking for HugePages_Rsvd; this excerpt ends mid-scan, with NFS_Unstable the last field checked]
00:02:52.608 02:57:23
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.608 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:52.609 nr_hugepages=1024 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:52.609 resv_hugepages=0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.609 surplus_hugepages=0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:52.609 anon_hugepages=0 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.609 02:57:23 
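The get_meminfo helper traced above is a straight key lookup over /proc/meminfo: split each line on ': ', skip until the requested field, print its value. A minimal sketch of the same pattern, with an illustrative function name (the harness's own helper lives in test/setup/common.sh as get_meminfo):

    #!/usr/bin/env bash
    # Sketch of the lookup traced above. get_meminfo_value is a hypothetical
    # stand-in; only the parsing pattern mirrors setup/common.sh.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # xtrace renders this as [[ <field> == \H\u\g\e... ]]
            echo "$val"                        # numeric value only; the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }

    resv=$(get_meminfo_value HugePages_Rsvd)   # -> 0 here, matching the trace

Each non-matching field costs one read and one pattern test, which is exactly what the long run of continue steps in the trace corresponds to.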
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.609 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175000588 kB' 'MemAvailable: 178027940 kB' 'Buffers: 3888 kB' 'Cached: 10741728 kB' 'SwapCached: 0 kB' 'Active: 7723100 kB' 'Inactive: 3489408 kB' 'Active(anon): 7157548 kB' 'Inactive(anon): 0 kB' 'Active(file): 565552 kB' 'Inactive(file): 3489408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470208 kB' 'Mapped: 166112 kB' 'Shmem: 6690656 kB' 'KReclaimable: 233216 kB' 'Slab: 793012 kB' 'SReclaimable: 233216 kB' 'SUnreclaim: 559796 kB' 'KernelStack: 20160 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8501984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314636 kB' 'VmallocChunk: 0 kB' 'Percpu: 71424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2786260 kB' 'DirectMap2M: 14718976 kB' 'DirectMap1G: 184549376 kB'
00:02:52.610 [... the scan then walks every field from MemTotal through Unaccepted with the same continue / IFS=': ' / read -r var val _ cycle, this time matching against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ...]
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
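The arithmetic guards just traced encode the pool bookkeeping the test expects: the HugePages_Total read back from /proc/meminfo must equal the requested page count plus surplus and reserved pages, all zero on this run. A hedged sketch of the same invariant check, reusing the illustrative helper from the earlier sketch:

    # Consistency check mirroring the hugepages.sh@107/@109 guards (sketch only).
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)

    (( total == nr_hugepages + surp + resv )) ||
        echo "pool mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    (( total == nr_hugepages )) ||
        echo "surplus or reserved pages present" >&2

One caveat the harness sidesteps by reading a single snapshot: on a busy host, consecutive reads of /proc/meminfo can disagree, so a naive re-read between the two checks could race with allocation.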
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.611 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91405512 kB' 'MemUsed: 6210116 kB' 'SwapCached: 0 kB' 'Active: 2984516 kB' 'Inactive: 98692 kB' 'Active(anon): 2652116 kB' 'Inactive(anon): 0 kB' 'Active(file): 332400 kB' 'Inactive(file): 98692 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2736540 kB' 'Mapped: 112088 kB' 'AnonPages: 349912 kB' 'Shmem: 2305448 kB' 'KernelStack: 12536 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106564 kB' 'Slab: 371508 kB' 'SReclaimable: 106564 kB' 'SUnreclaim: 264944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:52.611 [... the node-0 scan repeats the continue / IFS=': ' / read -r var val _ cycle for each field from MemTotal through HugePages_Free, matching against \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
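The node-0 lookup just traced differs from the machine-wide one in two steps: common.sh@24 swaps mem_f to the per-node sysfs file, and common.sh@29 strips the "Node 0 " prefix that file puts on every line. A sketch of that selection under the same illustrative naming as the earlier helper:

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) strip below is an extglob pattern

    # Hypothetical per-node variant of the earlier sketch. With a node
    # argument it reads /sys/devices/system/node/node<N>/meminfo, whose
    # lines look like "Node 0 HugePages_Surp: 0", and drops the prefix.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }          # no-op for /proc/meminfo lines
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # -> 0, as the trace shows for node0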
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:52.612 node0=1024 expecting 1024
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:52.612
00:02:52.612 real 0m5.867s
00:02:52.612 user 0m2.341s
00:02:52.612 sys 0m3.650s
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:52.612 02:57:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:52.612 ************************************
00:02:52.612 END TEST no_shrink_alloc
00:02:52.612 ************************************
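With the allocation tests done, the suite's clear_hp teardown runs next in the trace: it walks every NUMA node and every hugepage size (two sizes per node here, judging by the loop iterations below) and writes 0 back through sysfs. A minimal equivalent using the real sysfs layout; it needs root:

    #!/usr/bin/env bash
    # Teardown sketch mirroring the clear_hp loop traced below: zero the
    # per-node pool for each supported hugepage size, then flag the
    # environment as cleared.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # flag consumed by the harness's setup scripts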
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:52.613 02:57:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:52.613 00:02:52.613 real 0m21.426s 00:02:52.613 user 0m8.173s 00:02:52.613 sys 0m12.696s 00:02:52.613 02:57:23 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:52.613 02:57:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.613 ************************************ 00:02:52.613 END TEST hugepages 00:02:52.613 ************************************ 00:02:52.613 02:57:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:52.613 02:57:23 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:52.613 02:57:23 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:52.613 02:57:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:52.613 ************************************ 00:02:52.613 START TEST driver 00:02:52.613 ************************************ 00:02:52.613 02:57:23 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:52.613 * Looking for test storage... 
00:02:52.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.873 02:57:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:52.873 02:57:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.873 02:57:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.072 02:57:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:57.072 02:57:27 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:57.072 02:57:27 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:57.072 02:57:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:57.072 ************************************ 00:02:57.072 START TEST guess_driver 00:02:57.072 ************************************ 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:57.072 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:57.072 Looking for driver=vfio-pci 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.072 02:57:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.620 02:57:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.558 02:57:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.753 00:03:04.753 real 0m7.656s 00:03:04.753 user 0m2.208s 00:03:04.753 sys 0m3.949s 00:03:04.753 02:57:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.753 02:57:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:04.753 ************************************ 00:03:04.753 END TEST guess_driver 00:03:04.753 ************************************ 00:03:04.753 00:03:04.753 real 0m11.818s 00:03:04.753 user 0m3.460s 00:03:04.753 sys 0m6.095s 00:03:04.753 02:57:35 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.753 
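guess_driver, which just finished above, picks vfio-pci because the box has active IOMMU groups (174 of them) and modprobe --show-depends resolves the vfio_pci dependency chain; the enable_unsafe_noiommu_mode parameter read N, so the unsafe path was never needed. A rough sketch of that decision logic, assuming the same sysfs and modprobe interfaces seen in the trace (the non-vfio fallback branch is an assumption for illustration, not shown in this log):

pick_driver() {
    shopt -s nullglob                       # an empty glob must yield zero elements
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
        echo vfio-pci                       # IOMMU active and module chain resolves
    elif [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null) == Y ]]; then
        echo vfio-pci                       # unsafe no-IOMMU mode explicitly enabled
    else
        echo uio_pci_generic                # hypothetical fallback for this sketch only
    fi
}
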
02:57:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:04.753 ************************************ 00:03:04.753 END TEST driver 00:03:04.753 ************************************ 00:03:04.753 02:57:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.753 02:57:35 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.753 02:57:35 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.753 02:57:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.753 ************************************ 00:03:04.753 START TEST devices 00:03:04.753 ************************************ 00:03:04.753 02:57:35 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.753 * Looking for test storage... 00:03:04.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.753 02:57:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:04.753 02:57:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:04.753 02:57:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.753 02:57:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.046 02:57:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:08.046 02:57:38 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:08.046 02:57:38 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:08.046 02:57:38 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:08.046 02:57:38 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:08.046 02:57:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:08.047 02:57:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:08.047 No valid GPT data, 
bailing 00:03:08.047 02:57:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:08.047 02:57:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:08.047 02:57:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:08.047 02:57:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:08.047 ************************************ 00:03:08.047 START TEST nvme_mount 00:03:08.047 ************************************ 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:08.047 02:57:38 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:08.047 02:57:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:08.616 Creating new GPT entries in memory. 00:03:08.616 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:08.616 other utilities. 00:03:08.616 02:57:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:08.616 02:57:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:08.616 02:57:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:08.616 02:57:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:08.616 02:57:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:09.554 Creating new GPT entries in memory. 00:03:09.554 The operation has completed successfully. 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 840076 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:09.554 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
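The nvme_mount setup traced here zaps the disk, creates a single 1 GiB partition (sectors 2048-2099199, i.e. 1073741824 bytes at 512 bytes per sector), waits for the partition uevent, then formats and mounts it. The same steps in isolation; the device and mount point are this run's values, and udevadm settle stands in for the repo's sync_dev_uevents.sh helper:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199 # one 1 GiB partition, as in the trace
udevadm settle                                    # stand-in for sync_dev_uevents.sh
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
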
00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.814 02:57:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:12.351 02:57:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.351 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:12.352 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:12.352 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:12.352 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:12.352 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:12.352 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:12.352 02:57:43 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.352 02:57:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:14.971 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.231 02:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:17.766 02:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.026 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:18.026 00:03:18.026 real 0m10.477s 00:03:18.026 user 0m3.088s 00:03:18.026 sys 0m5.161s 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.026 02:57:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:18.026 ************************************ 00:03:18.026 END TEST nvme_mount 00:03:18.026 ************************************ 
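cleanup_nvme, traced just before the END TEST banner, only unmounts when the mount point is actually in use and then wipes filesystem signatures from the partition before the whole disk; the wipefs lines above show exactly which magic bytes (ext4 53 ef, the GPT headers, the protective MBR 55 aa) were erased at which offsets. Standalone, the teardown amounts to:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

mountpoint -q "$mnt" && umount "$mnt"                    # unmount only if mounted
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # partition signatures first
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # then the disk itself
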
00:03:18.026 02:57:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:18.026 02:57:49 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.026 02:57:49 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.026 02:57:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:18.026 ************************************ 00:03:18.026 START TEST dm_mount 00:03:18.026 ************************************ 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:18.026 02:57:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:19.405 Creating new GPT entries in memory. 00:03:19.405 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:19.405 other utilities. 00:03:19.405 02:57:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:19.405 02:57:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.405 02:57:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:19.405 02:57:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:19.405 02:57:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:20.342 Creating new GPT entries in memory. 00:03:20.342 The operation has completed successfully. 
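dm_mount repeats the same partitioning dance for two 1 GiB partitions (the second, sgdisk --new=2:2099200:4196351, follows just below) and then builds a device-mapper node with dmsetup create nvme_dm_test, which resolves to /dev/dm-2 on this machine. The mapping table itself is never echoed into the log; a plausible linear concatenation of the two 2097152-sector test partitions would look like this, with the table values illustrative rather than taken from the test:

# Illustrative only: join both test partitions into one linear dm device.
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test   # prints /dev/dm-2 in this run
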
00:03:20.342 02:57:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:20.342 02:57:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:20.342 02:57:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:20.342 02:57:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:20.343 02:57:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:21.281 The operation has completed successfully. 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 844261 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.281 02:57:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.883 02:57:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.883 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:23.884 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:23.884 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:24.143 02:57:55 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.143 02:57:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.682 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.683 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:26.943 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:26.943 00:03:26.943 real 0m8.784s 00:03:26.943 user 0m2.152s 00:03:26.943 sys 0m3.654s 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.943 02:57:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:26.943 ************************************ 00:03:26.943 END TEST dm_mount 00:03:26.943 ************************************ 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.943 02:57:57 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.943 02:57:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:27.203 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:27.203 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:27.203 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:27.203 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:27.203 02:57:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:27.203 00:03:27.203 real 0m22.680s 00:03:27.203 user 0m6.385s 00:03:27.203 sys 0m10.900s 00:03:27.203 02:57:58 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.203 02:57:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.203 ************************************ 00:03:27.203 END TEST devices 00:03:27.203 ************************************ 00:03:27.203 00:03:27.203 real 1m15.827s 00:03:27.203 user 0m24.744s 00:03:27.203 sys 0m41.550s 00:03:27.203 02:57:58 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:27.203 02:57:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.203 ************************************ 00:03:27.203 END TEST setup.sh 00:03:27.203 ************************************ 00:03:27.203 02:57:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.750 Hugepages 00:03:29.750 node hugesize free / total 00:03:29.750 node0 1048576kB 0 / 0 00:03:29.750 node0 2048kB 2048 / 2048 00:03:29.750 node1 1048576kB 0 / 0 00:03:29.750 node1 2048kB 0 / 0 00:03:29.750 00:03:29.750 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.750 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:29.750 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:29.750 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:29.750 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:29.750 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:29.750 02:58:00 -- spdk/autotest.sh@130 -- # uname -s 00:03:29.750 02:58:00 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:29.750 02:58:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:29.750 02:58:00 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.290 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.290 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.290 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.548 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.487 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.487 02:58:04 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:34.426 02:58:05 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:34.426 02:58:05 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:34.426 02:58:05 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.426 02:58:05 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:34.426 02:58:05 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:34.426 02:58:05 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:34.426 02:58:05 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.426 02:58:05 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.426 02:58:05 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:34.685 02:58:05 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:34.685 02:58:05 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:03:34.685 02:58:05 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.269 Waiting for block devices as requested 00:03:37.269 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:37.269 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.548 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.548 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.548 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.548 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.808 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:37.808 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:37.808 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:37.808 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:38.067 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:38.067 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:38.067 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:38.327 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:38.327 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.327 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.586 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.586 02:58:09 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
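The bdf-collection step traced above can be reproduced on its own. A minimal sketch, assuming gen_nvme.sh emits an SPDK JSON config whose controllers carry their PCI address in .config[].params.traddr (the rootdir path is the one used throughout this run):

    # Collect the PCI addresses (BDFs) of all NVMe controllers gen_nvme.sh reports.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Bail out early if nothing was found, mirroring the (( 1 == 0 )) guard in the trace.
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # on this node: 0000:5e:00.0
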
00:03:38.586 02:58:09 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:03:38.586 02:58:09 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:38.586 02:58:09 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:38.586 02:58:09 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:38.586 02:58:09 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:38.586 02:58:09 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:03:38.586 02:58:09 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:38.586 02:58:09 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:38.586 02:58:09 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:38.586 02:58:09 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:38.586 02:58:09 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:38.586 02:58:09 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:38.586 02:58:09 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:38.586 02:58:09 -- common/autotest_common.sh@1553 -- # continue 00:03:38.586 02:58:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:38.586 02:58:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.586 02:58:09 -- common/autotest_common.sh@10 -- # set +x 00:03:38.586 02:58:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:38.586 02:58:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:38.586 02:58:09 -- common/autotest_common.sh@10 -- # set +x 00:03:38.586 02:58:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.125 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.125 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.125 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.125 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.125 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.384 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.385 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.324 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.324 02:58:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:42.324 02:58:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.324 02:58:13 -- 
common/autotest_common.sh@10 -- # set +x 00:03:42.324 02:58:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:42.324 02:58:13 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:42.324 02:58:13 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.324 02:58:13 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:42.324 02:58:13 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:42.324 02:58:13 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:42.324 02:58:13 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:42.324 02:58:13 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:42.324 02:58:13 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.325 02:58:13 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.325 02:58:13 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:42.325 02:58:13 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:42.325 02:58:13 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:03:42.325 02:58:13 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:42.325 02:58:13 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:42.325 02:58:13 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:42.325 02:58:13 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:42.325 02:58:13 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:42.325 02:58:13 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5e:00.0 00:03:42.325 02:58:13 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5e:00.0 ]] 00:03:42.325 02:58:13 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=853050 00:03:42.325 02:58:13 -- common/autotest_common.sh@1594 -- # waitforlisten 853050 00:03:42.325 02:58:13 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.325 02:58:13 -- common/autotest_common.sh@827 -- # '[' -z 853050 ']' 00:03:42.325 02:58:13 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.325 02:58:13 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:42.325 02:58:13 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.325 02:58:13 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:42.325 02:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:42.584 [2024-05-15 02:58:13.501104] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
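The oacs probe traced a few entries back (oacs=' 0xe', oacs_ns_manage=8) boils down to masking bit 3 of the controller's Optional Admin Command Support field. A hedged sketch of the same check, assuming nvme-cli is installed and /dev/nvme0 is the controller of interest:

    # Read OACS from identify-controller output; the value arrives as e.g. ' 0xe'.
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    # Bit 3 (mask 0x8) advertises namespace management support: 0xe & 0x8 = 8.
    oacs_ns_manage=$(( oacs & 0x8 ))
    if (( oacs_ns_manage != 0 )); then
        echo "controller supports namespace management"
    fi
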
00:03:42.584 [2024-05-15 02:58:13.501151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853050 ] 00:03:42.584 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.584 [2024-05-15 02:58:13.555451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.584 [2024-05-15 02:58:13.634779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.151 02:58:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:43.152 02:58:14 -- common/autotest_common.sh@860 -- # return 0 00:03:43.152 02:58:14 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:43.152 02:58:14 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:43.152 02:58:14 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:46.443 nvme0n1 00:03:46.443 02:58:17 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:46.443 [2024-05-15 02:58:17.434509] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:46.443 request: 00:03:46.443 { 00:03:46.443 "nvme_ctrlr_name": "nvme0", 00:03:46.443 "password": "test", 00:03:46.443 "method": "bdev_nvme_opal_revert", 00:03:46.443 "req_id": 1 00:03:46.443 } 00:03:46.443 Got JSON-RPC error response 00:03:46.443 response: 00:03:46.443 { 00:03:46.443 "code": -32602, 00:03:46.443 "message": "Invalid parameters" 00:03:46.443 } 00:03:46.443 02:58:17 -- common/autotest_common.sh@1600 -- # true 00:03:46.443 02:58:17 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:46.443 02:58:17 -- common/autotest_common.sh@1604 -- # killprocess 853050 00:03:46.443 02:58:17 -- common/autotest_common.sh@946 -- # '[' -z 853050 ']' 00:03:46.443 02:58:17 -- common/autotest_common.sh@950 -- # kill -0 853050 00:03:46.443 02:58:17 -- common/autotest_common.sh@951 -- # uname 00:03:46.443 02:58:17 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:46.443 02:58:17 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 853050 00:03:46.443 02:58:17 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:46.443 02:58:17 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:46.443 02:58:17 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 853050' 00:03:46.443 killing process with pid 853050 00:03:46.443 02:58:17 -- common/autotest_common.sh@965 -- # kill 853050 00:03:46.443 02:58:17 -- common/autotest_common.sh@970 -- # wait 853050 00:03:48.351 02:58:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:48.351 02:58:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:48.351 02:58:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.351 02:58:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.351 02:58:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:48.351 02:58:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:48.351 02:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.351 02:58:19 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.351 02:58:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.351 02:58:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.351 
02:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:48.351 ************************************ 00:03:48.351 START TEST env 00:03:48.351 ************************************ 00:03:48.351 02:58:19 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.351 * Looking for test storage... 00:03:48.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:48.351 02:58:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.351 02:58:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.351 02:58:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.351 02:58:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.351 ************************************ 00:03:48.351 START TEST env_memory 00:03:48.351 ************************************ 00:03:48.351 02:58:19 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.351 00:03:48.351 00:03:48.351 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.351 http://cunit.sourceforge.net/ 00:03:48.351 00:03:48.351 00:03:48.351 Suite: memory 00:03:48.351 Test: alloc and free memory map ...[2024-05-15 02:58:19.330212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.351 passed 00:03:48.351 Test: mem map translation ...[2024-05-15 02:58:19.348238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.351 [2024-05-15 02:58:19.348253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.351 [2024-05-15 02:58:19.348287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.351 [2024-05-15 02:58:19.348293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.351 passed 00:03:48.351 Test: mem map registration ...[2024-05-15 02:58:19.384797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:48.351 [2024-05-15 02:58:19.384810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:48.351 passed 00:03:48.351 Test: mem map adjacent registrations ...passed 00:03:48.351 00:03:48.351 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.351 suites 1 1 n/a 0 0 00:03:48.351 tests 4 4 4 0 0 00:03:48.351 asserts 152 152 152 0 n/a 00:03:48.351 00:03:48.351 Elapsed time = 0.137 seconds 00:03:48.351 00:03:48.351 real 0m0.149s 00:03:48.351 user 0m0.140s 00:03:48.351 sys 0m0.008s 00:03:48.351 02:58:19 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.351 02:58:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.351 ************************************ 00:03:48.351 END TEST env_memory 
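The START TEST / END TEST banners and the real/user/sys timing lines above come from the run_test wrapper in test/common/autotest_common.sh. A simplified approximation of what it does (not the exact implementation, which also manages xtrace and timing bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # run the test binary/script and print real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

Invoked here as run_test env_memory "$rootdir/test/env/memory/memory_ut", which is what produces the banners around the CUnit output above.
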
00:03:48.351 ************************************ 00:03:48.351 02:58:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.351 02:58:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.351 02:58:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.351 02:58:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.351 ************************************ 00:03:48.351 START TEST env_vtophys 00:03:48.351 ************************************ 00:03:48.351 02:58:19 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.611 EAL: lib.eal log level changed from notice to debug 00:03:48.611 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.611 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.611 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.611 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.611 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.611 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.611 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.611 EAL: Detected lcore 7 as core 8 on socket 0 00:03:48.611 EAL: Detected lcore 8 as core 9 on socket 0 00:03:48.611 EAL: Detected lcore 9 as core 10 on socket 0 00:03:48.611 EAL: Detected lcore 10 as core 11 on socket 0 00:03:48.611 EAL: Detected lcore 11 as core 12 on socket 0 00:03:48.611 EAL: Detected lcore 12 as core 13 on socket 0 00:03:48.611 EAL: Detected lcore 13 as core 16 on socket 0 00:03:48.611 EAL: Detected lcore 14 as core 17 on socket 0 00:03:48.611 EAL: Detected lcore 15 as core 18 on socket 0 00:03:48.611 EAL: Detected lcore 16 as core 19 on socket 0 00:03:48.611 EAL: Detected lcore 17 as core 20 on socket 0 00:03:48.611 EAL: Detected lcore 18 as core 21 on socket 0 00:03:48.611 EAL: Detected lcore 19 as core 25 on socket 0 00:03:48.611 EAL: Detected lcore 20 as core 26 on socket 0 00:03:48.611 EAL: Detected lcore 21 as core 27 on socket 0 00:03:48.611 EAL: Detected lcore 22 as core 28 on socket 0 00:03:48.611 EAL: Detected lcore 23 as core 29 on socket 0 00:03:48.611 EAL: Detected lcore 24 as core 0 on socket 1 00:03:48.611 EAL: Detected lcore 25 as core 1 on socket 1 00:03:48.611 EAL: Detected lcore 26 as core 2 on socket 1 00:03:48.611 EAL: Detected lcore 27 as core 3 on socket 1 00:03:48.611 EAL: Detected lcore 28 as core 4 on socket 1 00:03:48.611 EAL: Detected lcore 29 as core 5 on socket 1 00:03:48.611 EAL: Detected lcore 30 as core 6 on socket 1 00:03:48.611 EAL: Detected lcore 31 as core 9 on socket 1 00:03:48.611 EAL: Detected lcore 32 as core 10 on socket 1 00:03:48.611 EAL: Detected lcore 33 as core 11 on socket 1 00:03:48.611 EAL: Detected lcore 34 as core 12 on socket 1 00:03:48.611 EAL: Detected lcore 35 as core 13 on socket 1 00:03:48.611 EAL: Detected lcore 36 as core 16 on socket 1 00:03:48.611 EAL: Detected lcore 37 as core 17 on socket 1 00:03:48.611 EAL: Detected lcore 38 as core 18 on socket 1 00:03:48.611 EAL: Detected lcore 39 as core 19 on socket 1 00:03:48.611 EAL: Detected lcore 40 as core 20 on socket 1 00:03:48.611 EAL: Detected lcore 41 as core 21 on socket 1 00:03:48.611 EAL: Detected lcore 42 as core 24 on socket 1 00:03:48.611 EAL: Detected lcore 43 as core 25 on socket 1 00:03:48.611 EAL: Detected lcore 44 as core 26 on socket 1 00:03:48.611 EAL: Detected lcore 45 as core 27 on socket 1 00:03:48.611 EAL: Detected lcore 46 as core 28 on socket 1 00:03:48.611 EAL: Detected 
lcore 47 as core 29 on socket 1 00:03:48.612 EAL: Detected lcore 48 as core 0 on socket 0 00:03:48.612 EAL: Detected lcore 49 as core 1 on socket 0 00:03:48.612 EAL: Detected lcore 50 as core 2 on socket 0 00:03:48.612 EAL: Detected lcore 51 as core 3 on socket 0 00:03:48.612 EAL: Detected lcore 52 as core 4 on socket 0 00:03:48.612 EAL: Detected lcore 53 as core 5 on socket 0 00:03:48.612 EAL: Detected lcore 54 as core 6 on socket 0 00:03:48.612 EAL: Detected lcore 55 as core 8 on socket 0 00:03:48.612 EAL: Detected lcore 56 as core 9 on socket 0 00:03:48.612 EAL: Detected lcore 57 as core 10 on socket 0 00:03:48.612 EAL: Detected lcore 58 as core 11 on socket 0 00:03:48.612 EAL: Detected lcore 59 as core 12 on socket 0 00:03:48.612 EAL: Detected lcore 60 as core 13 on socket 0 00:03:48.612 EAL: Detected lcore 61 as core 16 on socket 0 00:03:48.612 EAL: Detected lcore 62 as core 17 on socket 0 00:03:48.612 EAL: Detected lcore 63 as core 18 on socket 0 00:03:48.612 EAL: Detected lcore 64 as core 19 on socket 0 00:03:48.612 EAL: Detected lcore 65 as core 20 on socket 0 00:03:48.612 EAL: Detected lcore 66 as core 21 on socket 0 00:03:48.612 EAL: Detected lcore 67 as core 25 on socket 0 00:03:48.612 EAL: Detected lcore 68 as core 26 on socket 0 00:03:48.612 EAL: Detected lcore 69 as core 27 on socket 0 00:03:48.612 EAL: Detected lcore 70 as core 28 on socket 0 00:03:48.612 EAL: Detected lcore 71 as core 29 on socket 0 00:03:48.612 EAL: Detected lcore 72 as core 0 on socket 1 00:03:48.612 EAL: Detected lcore 73 as core 1 on socket 1 00:03:48.612 EAL: Detected lcore 74 as core 2 on socket 1 00:03:48.612 EAL: Detected lcore 75 as core 3 on socket 1 00:03:48.612 EAL: Detected lcore 76 as core 4 on socket 1 00:03:48.612 EAL: Detected lcore 77 as core 5 on socket 1 00:03:48.612 EAL: Detected lcore 78 as core 6 on socket 1 00:03:48.612 EAL: Detected lcore 79 as core 9 on socket 1 00:03:48.612 EAL: Detected lcore 80 as core 10 on socket 1 00:03:48.612 EAL: Detected lcore 81 as core 11 on socket 1 00:03:48.612 EAL: Detected lcore 82 as core 12 on socket 1 00:03:48.612 EAL: Detected lcore 83 as core 13 on socket 1 00:03:48.612 EAL: Detected lcore 84 as core 16 on socket 1 00:03:48.612 EAL: Detected lcore 85 as core 17 on socket 1 00:03:48.612 EAL: Detected lcore 86 as core 18 on socket 1 00:03:48.612 EAL: Detected lcore 87 as core 19 on socket 1 00:03:48.612 EAL: Detected lcore 88 as core 20 on socket 1 00:03:48.612 EAL: Detected lcore 89 as core 21 on socket 1 00:03:48.612 EAL: Detected lcore 90 as core 24 on socket 1 00:03:48.612 EAL: Detected lcore 91 as core 25 on socket 1 00:03:48.612 EAL: Detected lcore 92 as core 26 on socket 1 00:03:48.612 EAL: Detected lcore 93 as core 27 on socket 1 00:03:48.612 EAL: Detected lcore 94 as core 28 on socket 1 00:03:48.612 EAL: Detected lcore 95 as core 29 on socket 1 00:03:48.612 EAL: Maximum logical cores by configuration: 128 00:03:48.612 EAL: Detected CPU lcores: 96 00:03:48.612 EAL: Detected NUMA nodes: 2 00:03:48.612 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:48.612 EAL: Detected shared linkage of DPDK 00:03:48.612 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.612 EAL: Bus pci wants IOVA as 'DC' 00:03:48.612 EAL: Buses did not request a specific IOVA mode. 00:03:48.612 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.612 EAL: Selected IOVA mode 'VA' 00:03:48.612 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.612 EAL: Probing VFIO support... 
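The VFIO probe above succeeds because the host has a working IOMMU. A quick way to check the same precondition outside EAL, using standard sysfs paths (this is an illustrative check, not EAL's actual probing logic):

    # EAL can select IOVA-as-VA and use vfio-pci only when IOMMU groups exist.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "IOMMU active: vfio-pci and IOVA-as-VA are usable"
    else
        echo "no IOMMU groups: VFIO unavailable without no-IOMMU mode"
    fi
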
00:03:48.612 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.612 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.612 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.612 EAL: VFIO support initialized 00:03:48.612 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.612 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.612 EAL: Setting up physically contiguous memory... 00:03:48.612 EAL: Setting maximum number of open files to 524288 00:03:48.612 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.612 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.612 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.612 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:48.612 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.612 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.612 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.612 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.612 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.612 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:48.613 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.613 EAL: Hugepages will be freed exactly as allocated. 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: TSC frequency is ~2300000 KHz 00:03:48.613 EAL: Main lcore 0 is ready (tid=7f853bbb5a00;cpuset=[0]) 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 0 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.613 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.613 00:03:48.613 00:03:48.613 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.613 http://cunit.sourceforge.net/ 00:03:48.613 00:03:48.613 00:03:48.613 Suite: components_suite 00:03:48.613 Test: vtophys_malloc_test ...passed 00:03:48.613 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.613 EAL: Trying to obtain current memory policy. 
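The reservation sizes above are not arbitrary: each memseg list covers n_segs hugepages, so with n_segs:8192 and hugepage_sz:2097152 every list asks for 0x400000000 bytes (16 GiB) of virtual address space, and the small 0x61000-byte areas presumably hold the per-list bookkeeping. The arithmetic:

    # 8192 segments x 2 MiB hugepages = 16 GiB of VA per memseg list.
    printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000
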
00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.613 EAL: Trying to obtain current memory policy. 00:03:48.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.613 EAL: Restoring previous memory policy: 4 00:03:48.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.613 EAL: request: mp_malloc_sync 00:03:48.613 EAL: No shared files mode enabled, IPC is disabled 00:03:48.613 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.873 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.873 EAL: request: mp_malloc_sync 00:03:48.873 EAL: No shared files mode enabled, IPC is disabled 00:03:48.873 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.873 EAL: Trying to obtain current memory policy. 
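The expand/shrink sizes in vtophys_spdk_malloc_test above (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, ...) appear to follow 2^k + 2 MB, i.e. a doubling allocation on top of the 2 MB the heap starts with. A quick check of that reading:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
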
00:03:48.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.873 EAL: Restoring previous memory policy: 4 00:03:48.873 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.873 EAL: request: mp_malloc_sync 00:03:48.873 EAL: No shared files mode enabled, IPC is disabled 00:03:48.873 EAL: Heap on socket 0 was expanded by 514MB 00:03:48.873 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.132 EAL: request: mp_malloc_sync 00:03:49.132 EAL: No shared files mode enabled, IPC is disabled 00:03:49.132 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.132 EAL: Trying to obtain current memory policy. 00:03:49.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.132 EAL: Restoring previous memory policy: 4 00:03:49.132 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.132 EAL: request: mp_malloc_sync 00:03:49.132 EAL: No shared files mode enabled, IPC is disabled 00:03:49.132 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.390 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.649 EAL: request: mp_malloc_sync 00:03:49.649 EAL: No shared files mode enabled, IPC is disabled 00:03:49.649 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:49.649 passed 00:03:49.649 00:03:49.649 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.649 suites 1 1 n/a 0 0 00:03:49.649 tests 2 2 2 0 0 00:03:49.649 asserts 497 497 497 0 n/a 00:03:49.649 00:03:49.649 Elapsed time = 0.969 seconds 00:03:49.649 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.649 EAL: request: mp_malloc_sync 00:03:49.649 EAL: No shared files mode enabled, IPC is disabled 00:03:49.649 EAL: Heap on socket 0 was shrunk by 2MB 00:03:49.649 EAL: No shared files mode enabled, IPC is disabled 00:03:49.649 EAL: No shared files mode enabled, IPC is disabled 00:03:49.649 EAL: No shared files mode enabled, IPC is disabled 00:03:49.649 00:03:49.649 real 0m1.078s 00:03:49.649 user 0m0.629s 00:03:49.649 sys 0m0.421s 00:03:49.649 02:58:20 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.649 02:58:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:49.649 ************************************ 00:03:49.649 END TEST env_vtophys 00:03:49.649 ************************************ 00:03:49.649 02:58:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.649 02:58:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.649 02:58:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.649 02:58:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.649 ************************************ 00:03:49.649 START TEST env_pci 00:03:49.649 ************************************ 00:03:49.649 02:58:20 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.649 00:03:49.649 00:03:49.649 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.649 http://cunit.sourceforge.net/ 00:03:49.649 00:03:49.649 00:03:49.649 Suite: pci 00:03:49.649 Test: pci_hook ...[2024-05-15 02:58:20.679838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 854375 has claimed it 00:03:49.649 EAL: Cannot find device (10000:00:01.0) 00:03:49.649 EAL: Failed to attach device on primary process 00:03:49.649 passed 00:03:49.649 00:03:49.649 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:49.649 suites 1 1 n/a 0 0 00:03:49.649 tests 1 1 1 0 0 00:03:49.649 asserts 25 25 25 0 n/a 00:03:49.649 00:03:49.649 Elapsed time = 0.026 seconds 00:03:49.649 00:03:49.649 real 0m0.046s 00:03:49.649 user 0m0.016s 00:03:49.649 sys 0m0.030s 00:03:49.649 02:58:20 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.649 02:58:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:49.649 ************************************ 00:03:49.649 END TEST env_pci 00:03:49.649 ************************************ 00:03:49.649 02:58:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:49.649 02:58:20 env -- env/env.sh@15 -- # uname 00:03:49.649 02:58:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:49.649 02:58:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:49.649 02:58:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.649 02:58:20 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:49.649 02:58:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.649 02:58:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.649 ************************************ 00:03:49.649 START TEST env_dpdk_post_init 00:03:49.649 ************************************ 00:03:49.649 02:58:20 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.906 EAL: Detected CPU lcores: 96 00:03:49.906 EAL: Detected NUMA nodes: 2 00:03:49.906 EAL: Detected shared linkage of DPDK 00:03:49.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.906 EAL: Selected IOVA mode 'VA' 00:03:49.906 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.906 EAL: VFIO support initialized 00:03:49.906 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.906 EAL: Using IOMMU type 1 (Type 1) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:49.906 EAL: Ignore mapping IO port bar(1) 00:03:49.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:50.843 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 
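The probe sequence running above and below shows which SPDK userspace driver claimed each BDF (spdk_ioat for the I/OAT engines, spdk_nvme for 0000:5e:00.0). To inspect the kernel-side binding of the same devices before or after env_dpdk_post_init runs, a sketch over standard sysfs paths (the BDFs are examples from this run):

    for bdf in 0000:5e:00.0 0000:00:04.0; do
        drv=unbound
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] &&
            drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        echo "$bdf -> $drv"   # e.g. vfio-pci after setup.sh, nvme/ioatdma after reset
    done
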
00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:50.843 EAL: Ignore mapping IO port bar(1) 00:03:50.843 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:54.128 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:54.128 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:54.128 Starting DPDK initialization... 00:03:54.128 Starting SPDK post initialization... 00:03:54.128 SPDK NVMe probe 00:03:54.128 Attaching to 0000:5e:00.0 00:03:54.128 Attached to 0000:5e:00.0 00:03:54.128 Cleaning up... 00:03:54.128 00:03:54.128 real 0m4.359s 00:03:54.128 user 0m3.309s 00:03:54.128 sys 0m0.122s 00:03:54.128 02:58:25 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.128 02:58:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.128 ************************************ 00:03:54.128 END TEST env_dpdk_post_init 00:03:54.128 ************************************ 00:03:54.128 02:58:25 env -- env/env.sh@26 -- # uname 00:03:54.128 02:58:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:54.128 02:58:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.128 02:58:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.128 02:58:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.128 02:58:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.128 ************************************ 00:03:54.128 START TEST env_mem_callbacks 00:03:54.128 ************************************ 00:03:54.128 02:58:25 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.128 EAL: Detected CPU lcores: 96 00:03:54.128 EAL: Detected NUMA nodes: 2 00:03:54.128 EAL: Detected shared linkage of DPDK 00:03:54.128 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:54.128 EAL: Selected IOVA mode 'VA' 00:03:54.128 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.128 EAL: VFIO support initialized 00:03:54.128 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:54.128 00:03:54.128 00:03:54.128 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.128 http://cunit.sourceforge.net/ 00:03:54.128 00:03:54.128 00:03:54.128 Suite: memory 00:03:54.128 Test: test ... 
00:03:54.128 register 0x200000200000 2097152 00:03:54.128 malloc 3145728 00:03:54.128 register 0x200000400000 4194304 00:03:54.128 buf 0x200000500000 len 3145728 PASSED 00:03:54.128 malloc 64 00:03:54.128 buf 0x2000004fff40 len 64 PASSED 00:03:54.128 malloc 4194304 00:03:54.128 register 0x200000800000 6291456 00:03:54.129 buf 0x200000a00000 len 4194304 PASSED 00:03:54.129 free 0x200000500000 3145728 00:03:54.129 free 0x2000004fff40 64 00:03:54.129 unregister 0x200000400000 4194304 PASSED 00:03:54.129 free 0x200000a00000 4194304 00:03:54.129 unregister 0x200000800000 6291456 PASSED 00:03:54.129 malloc 8388608 00:03:54.129 register 0x200000400000 10485760 00:03:54.129 buf 0x200000600000 len 8388608 PASSED 00:03:54.129 free 0x200000600000 8388608 00:03:54.129 unregister 0x200000400000 10485760 PASSED 00:03:54.129 passed 00:03:54.129 00:03:54.129 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.129 suites 1 1 n/a 0 0 00:03:54.129 tests 1 1 1 0 0 00:03:54.129 asserts 15 15 15 0 n/a 00:03:54.129 00:03:54.129 Elapsed time = 0.006 seconds 00:03:54.388 00:03:54.388 real 0m0.059s 00:03:54.388 user 0m0.025s 00:03:54.388 sys 0m0.034s 00:03:54.388 02:58:25 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.388 02:58:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:54.388 ************************************ 00:03:54.388 END TEST env_mem_callbacks 00:03:54.388 ************************************ 00:03:54.388 00:03:54.388 real 0m6.156s 00:03:54.388 user 0m4.303s 00:03:54.388 sys 0m0.910s 00:03:54.388 02:58:25 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.388 02:58:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.388 ************************************ 00:03:54.388 END TEST env 00:03:54.388 ************************************ 00:03:54.388 02:58:25 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.388 02:58:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.388 02:58:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.388 02:58:25 -- common/autotest_common.sh@10 -- # set +x 00:03:54.388 ************************************ 00:03:54.388 START TEST rpc 00:03:54.388 ************************************ 00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.388 * Looking for test storage... 00:03:54.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=855194 00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 855194 00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@827 -- # '[' -z 855194 ']' 00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
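The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by the waitforlisten helper. A simplified approximation of its polling loop (the real helper in test/common/autotest_common.sh also exercises rpc.py; this sketch only checks process liveness and the socket):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }
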
00:03:54.388 02:58:25 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:54.388 02:58:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:54.388 02:58:25 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:54.388 02:58:25 -- common/autotest_common.sh@10 -- # set +x
00:03:54.388 ************************************
00:03:54.388 START TEST rpc
00:03:54.388 ************************************
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:54.388 * Looking for test storage...
00:03:54.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=855194
00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 855194
00:03:54.388 02:58:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@827 -- # '[' -z 855194 ']'
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:54.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:03:54.388 02:58:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.388 [2024-05-15 02:58:25.530697] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:03:54.388 [2024-05-15 02:58:25.530741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855194 ]
00:03:54.647 EAL: No free 2048 kB hugepages reported on node 1
00:03:54.647 [2024-05-15 02:58:25.585303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:54.647 [2024-05-15 02:58:25.666640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:54.647 [2024-05-15 02:58:25.666673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 855194' to capture a snapshot of events at runtime.
00:03:54.647 [2024-05-15 02:58:25.666681] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:54.647 [2024-05-15 02:58:25.666689] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:54.647 [2024-05-15 02:58:25.666695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid855194 for offline analysis/debug.
00:03:54.647 [2024-05-15 02:58:25.666712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:55.215 02:58:26 rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:03:55.215 02:58:26 rpc -- common/autotest_common.sh@860 -- # return 0
00:03:55.215 02:58:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:55.215 02:58:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:55.215 02:58:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:55.215 02:58:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:55.215 02:58:26 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:55.215 02:58:26 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:55.215 02:58:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:55.215 ************************************
00:03:55.215 START TEST rpc_integrity
00:03:55.215 ************************************
00:03:55.215 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity
00:03:55.215 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:55.215 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.216 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.216 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.216 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:55.216 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:55.475 {
00:03:55.475 "name": "Malloc0",
00:03:55.475 "aliases": [
00:03:55.475 "8016bb5b-741b-48f7-93af-5e321a17f173"
00:03:55.475 ],
00:03:55.475 "product_name": "Malloc disk",
00:03:55.475 "block_size": 512,
00:03:55.475 "num_blocks": 16384,
00:03:55.475 "uuid": "8016bb5b-741b-48f7-93af-5e321a17f173",
00:03:55.475 "assigned_rate_limits": {
00:03:55.475 "rw_ios_per_sec": 0,
00:03:55.475 "rw_mbytes_per_sec": 0,
00:03:55.475 "r_mbytes_per_sec": 0,
00:03:55.475 "w_mbytes_per_sec": 0
00:03:55.475 },
00:03:55.475 "claimed": false,
00:03:55.475 "zoned": false,
00:03:55.475 "supported_io_types": {
00:03:55.475 "read": true,
00:03:55.475 "write": true,
00:03:55.475 "unmap": true,
00:03:55.475 "write_zeroes": true,
00:03:55.475 "flush": true,
00:03:55.475 "reset": true,
00:03:55.475 "compare": false,
00:03:55.475 "compare_and_write": false,
00:03:55.475 "abort": true,
00:03:55.475 "nvme_admin": false,
00:03:55.475 "nvme_io": false
00:03:55.475 },
00:03:55.475 "memory_domains": [
00:03:55.475 {
00:03:55.475 "dma_device_id": "system",
00:03:55.475 "dma_device_type": 1
00:03:55.475 },
00:03:55.475 {
00:03:55.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.475 "dma_device_type": 2
00:03:55.475 }
00:03:55.475 ],
00:03:55.475 "driver_specific": {}
00:03:55.475 }
00:03:55.475 ]'
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.475 [2024-05-15 02:58:26.489706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:55.475 [2024-05-15 02:58:26.489734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:55.475 [2024-05-15 02:58:26.489746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1d1a0
00:03:55.475 [2024-05-15 02:58:26.489752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:55.475 [2024-05-15 02:58:26.490819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:55.475 [2024-05-15 02:58:26.490840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:55.475 Passthru0
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.475 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.475 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:55.475 {
00:03:55.475 "name": "Malloc0",
00:03:55.475 "aliases": [
00:03:55.475 "8016bb5b-741b-48f7-93af-5e321a17f173"
00:03:55.475 ],
00:03:55.475 "product_name": "Malloc disk",
00:03:55.475 "block_size": 512,
00:03:55.475 "num_blocks": 16384,
00:03:55.475 "uuid": "8016bb5b-741b-48f7-93af-5e321a17f173",
00:03:55.475 "assigned_rate_limits": {
00:03:55.475 "rw_ios_per_sec": 0,
00:03:55.475 "rw_mbytes_per_sec": 0,
00:03:55.475 "r_mbytes_per_sec": 0,
00:03:55.475 "w_mbytes_per_sec": 0
00:03:55.475 },
00:03:55.475 "claimed": true,
00:03:55.475 "claim_type": "exclusive_write",
00:03:55.475 "zoned": false,
00:03:55.475 "supported_io_types": {
00:03:55.475 "read": true,
00:03:55.475 "write": true,
00:03:55.475 "unmap": true,
00:03:55.475 "write_zeroes": true,
00:03:55.475 "flush": true,
00:03:55.475 "reset": true,
00:03:55.475 "compare": false,
00:03:55.475 "compare_and_write": false,
00:03:55.475 "abort": true,
00:03:55.475 "nvme_admin": false,
00:03:55.475 "nvme_io": false
00:03:55.475 },
00:03:55.475 "memory_domains": [
00:03:55.475 {
00:03:55.475 "dma_device_id": "system",
00:03:55.475 "dma_device_type": 1
00:03:55.475 },
00:03:55.475 {
00:03:55.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.475 "dma_device_type": 2
00:03:55.475 }
00:03:55.475 ],
00:03:55.475 "driver_specific": {}
00:03:55.475 },
00:03:55.475 {
00:03:55.475 "name": "Passthru0",
00:03:55.475 "aliases": [
00:03:55.475 "2613b022-f72a-5e66-8d03-76985721f562"
00:03:55.475 ],
00:03:55.475 "product_name": "passthru",
00:03:55.475 "block_size": 512,
00:03:55.475 "num_blocks": 16384,
00:03:55.475 "uuid": "2613b022-f72a-5e66-8d03-76985721f562",
00:03:55.475 "assigned_rate_limits": {
00:03:55.475 "rw_ios_per_sec": 0,
00:03:55.475 "rw_mbytes_per_sec": 0,
00:03:55.475 "r_mbytes_per_sec": 0,
00:03:55.475 "w_mbytes_per_sec": 0
00:03:55.475 },
00:03:55.475 "claimed": false,
00:03:55.475 "zoned": false,
00:03:55.475 "supported_io_types": {
00:03:55.475 "read": true,
00:03:55.475 "write": true,
00:03:55.475 "unmap": true,
00:03:55.475 "write_zeroes": true,
00:03:55.475 "flush": true,
00:03:55.475 "reset": true,
00:03:55.475 "compare": false,
00:03:55.475 "compare_and_write": false,
00:03:55.475 "abort": true,
00:03:55.475 "nvme_admin": false,
00:03:55.475 "nvme_io": false
00:03:55.475 },
00:03:55.475 "memory_domains": [
00:03:55.475 {
00:03:55.475 "dma_device_id": "system",
00:03:55.475 "dma_device_type": 1
00:03:55.475 },
00:03:55.475 {
00:03:55.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.475 "dma_device_type": 2
00:03:55.475 }
00:03:55.475 ],
00:03:55.475 "driver_specific": {
00:03:55.475 "passthru": {
00:03:55.475 "name": "Passthru0",
00:03:55.476 "base_bdev_name": "Malloc0"
00:03:55.476 }
00:03:55.476 }
00:03:55.476 }
00:03:55.476 ]'
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:55.476 02:58:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:55.476
00:03:55.476 real 0m0.265s
00:03:55.476 user 0m0.165s
00:03:55.476 sys 0m0.030s
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:55.476 02:58:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.476 ************************************
00:03:55.476 END TEST rpc_integrity
00:03:55.476 ************************************
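rpc_integrity above is a create/inspect/delete round-trip over the target's default UNIX socket. A sketch of the same sequence issued by hand with scripts/rpc.py against the spdk_tgt started earlier; the RPC names and arguments are exactly those traced above, and the jq checks mirror rpc.sh:

    ./scripts/rpc.py bdev_malloc_create 8 512                      # returns Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims the base bdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0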
"flush": true, 00:03:55.736 "reset": true, 00:03:55.736 "compare": false, 00:03:55.736 "compare_and_write": false, 00:03:55.736 "abort": true, 00:03:55.736 "nvme_admin": false, 00:03:55.736 "nvme_io": false 00:03:55.736 }, 00:03:55.736 "memory_domains": [ 00:03:55.736 { 00:03:55.736 "dma_device_id": "system", 00:03:55.736 "dma_device_type": 1 00:03:55.736 }, 00:03:55.736 { 00:03:55.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.736 "dma_device_type": 2 00:03:55.736 } 00:03:55.736 ], 00:03:55.736 "driver_specific": {} 00:03:55.736 } 00:03:55.736 ]' 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:55.736 02:58:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:55.736 00:03:55.736 real 0m0.128s 00:03:55.736 user 0m0.081s 00:03:55.736 sys 0m0.010s 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.736 02:58:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 ************************************ 00:03:55.736 END TEST rpc_plugins 00:03:55.736 ************************************ 00:03:55.736 02:58:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:55.736 02:58:26 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.736 02:58:26 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.736 02:58:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 START TEST rpc_trace_cmd_test 00:03:55.995 ************************************ 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:55.995 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid855194", 00:03:55.995 "tpoint_group_mask": "0x8", 00:03:55.995 "iscsi_conn": { 00:03:55.995 "mask": "0x2", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "scsi": { 00:03:55.995 "mask": "0x4", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "bdev": { 00:03:55.995 "mask": "0x8", 00:03:55.995 "tpoint_mask": 
"0xffffffffffffffff" 00:03:55.995 }, 00:03:55.995 "nvmf_rdma": { 00:03:55.995 "mask": "0x10", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "nvmf_tcp": { 00:03:55.995 "mask": "0x20", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "ftl": { 00:03:55.995 "mask": "0x40", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "blobfs": { 00:03:55.995 "mask": "0x80", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "dsa": { 00:03:55.995 "mask": "0x200", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "thread": { 00:03:55.995 "mask": "0x400", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "nvme_pcie": { 00:03:55.995 "mask": "0x800", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "iaa": { 00:03:55.995 "mask": "0x1000", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "nvme_tcp": { 00:03:55.995 "mask": "0x2000", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "bdev_nvme": { 00:03:55.995 "mask": "0x4000", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 }, 00:03:55.995 "sock": { 00:03:55.995 "mask": "0x8000", 00:03:55.995 "tpoint_mask": "0x0" 00:03:55.995 } 00:03:55.995 }' 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:55.995 02:58:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:55.995 00:03:55.995 real 0m0.203s 00:03:55.995 user 0m0.176s 00:03:55.995 sys 0m0.018s 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.995 02:58:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 END TEST rpc_trace_cmd_test 00:03:55.995 ************************************ 00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.254 ************************************ 00:03:56.254 START TEST rpc_daemon_integrity 00:03:56.254 ************************************ 00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- 
00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:03:55.995 02:58:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:55.995 02:58:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:56.254 ************************************
00:03:56.254 START TEST rpc_daemon_integrity
00:03:56.254 ************************************
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:56.254 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:56.255 {
00:03:56.255 "name": "Malloc2",
00:03:56.255 "aliases": [
00:03:56.255 "3a37485b-5e2e-44e8-b58a-3afec0d50f8e"
00:03:56.255 ],
00:03:56.255 "product_name": "Malloc disk",
00:03:56.255 "block_size": 512,
00:03:56.255 "num_blocks": 16384,
00:03:56.255 "uuid": "3a37485b-5e2e-44e8-b58a-3afec0d50f8e",
00:03:56.255 "assigned_rate_limits": {
00:03:56.255 "rw_ios_per_sec": 0,
00:03:56.255 "rw_mbytes_per_sec": 0,
00:03:56.255 "r_mbytes_per_sec": 0,
00:03:56.255 "w_mbytes_per_sec": 0
00:03:56.255 },
00:03:56.255 "claimed": false,
00:03:56.255 "zoned": false,
00:03:56.255 "supported_io_types": {
00:03:56.255 "read": true,
00:03:56.255 "write": true,
00:03:56.255 "unmap": true,
00:03:56.255 "write_zeroes": true,
00:03:56.255 "flush": true,
00:03:56.255 "reset": true,
00:03:56.255 "compare": false,
00:03:56.255 "compare_and_write": false,
00:03:56.255 "abort": true,
00:03:56.255 "nvme_admin": false,
00:03:56.255 "nvme_io": false
00:03:56.255 },
00:03:56.255 "memory_domains": [
00:03:56.255 {
00:03:56.255 "dma_device_id": "system",
00:03:56.255 "dma_device_type": 1
00:03:56.255 },
00:03:56.255 {
00:03:56.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:56.255 "dma_device_type": 2
00:03:56.255 }
00:03:56.255 ],
00:03:56.255 "driver_specific": {}
00:03:56.255 }
00:03:56.255 ]'
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 [2024-05-15 02:58:27.291919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:03:56.255 [2024-05-15 02:58:27.291946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:56.255 [2024-05-15 02:58:27.291961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1e560
00:03:56.255 [2024-05-15 02:58:27.291967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:56.255 [2024-05-15 02:58:27.292953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:56.255 [2024-05-15 02:58:27.292974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:56.255 Passthru0
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:56.255 {
00:03:56.255 "name": "Malloc2",
00:03:56.255 "aliases": [
00:03:56.255 "3a37485b-5e2e-44e8-b58a-3afec0d50f8e"
00:03:56.255 ],
00:03:56.255 "product_name": "Malloc disk",
00:03:56.255 "block_size": 512,
00:03:56.255 "num_blocks": 16384,
00:03:56.255 "uuid": "3a37485b-5e2e-44e8-b58a-3afec0d50f8e",
00:03:56.255 "assigned_rate_limits": {
00:03:56.255 "rw_ios_per_sec": 0,
00:03:56.255 "rw_mbytes_per_sec": 0,
00:03:56.255 "r_mbytes_per_sec": 0,
00:03:56.255 "w_mbytes_per_sec": 0
00:03:56.255 },
00:03:56.255 "claimed": true,
00:03:56.255 "claim_type": "exclusive_write",
00:03:56.255 "zoned": false,
00:03:56.255 "supported_io_types": {
00:03:56.255 "read": true,
00:03:56.255 "write": true,
00:03:56.255 "unmap": true,
00:03:56.255 "write_zeroes": true,
00:03:56.255 "flush": true,
00:03:56.255 "reset": true,
00:03:56.255 "compare": false,
00:03:56.255 "compare_and_write": false,
00:03:56.255 "abort": true,
00:03:56.255 "nvme_admin": false,
00:03:56.255 "nvme_io": false
00:03:56.255 },
00:03:56.255 "memory_domains": [
00:03:56.255 {
00:03:56.255 "dma_device_id": "system",
00:03:56.255 "dma_device_type": 1
00:03:56.255 },
00:03:56.255 {
00:03:56.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:56.255 "dma_device_type": 2
00:03:56.255 }
00:03:56.255 ],
00:03:56.255 "driver_specific": {}
00:03:56.255 },
00:03:56.255 {
00:03:56.255 "name": "Passthru0",
00:03:56.255 "aliases": [
00:03:56.255 "39f41778-de0d-599d-acdd-19c5244a9ddc"
00:03:56.255 ],
00:03:56.255 "product_name": "passthru",
00:03:56.255 "block_size": 512,
00:03:56.255 "num_blocks": 16384,
00:03:56.255 "uuid": "39f41778-de0d-599d-acdd-19c5244a9ddc",
00:03:56.255 "assigned_rate_limits": {
00:03:56.255 "rw_ios_per_sec": 0,
00:03:56.255 "rw_mbytes_per_sec": 0,
00:03:56.255 "r_mbytes_per_sec": 0,
00:03:56.255 "w_mbytes_per_sec": 0
00:03:56.255 },
00:03:56.255 "claimed": false,
00:03:56.255 "zoned": false,
00:03:56.255 "supported_io_types": {
00:03:56.255 "read": true,
00:03:56.255 "write": true,
00:03:56.255 "unmap": true,
00:03:56.255 "write_zeroes": true,
00:03:56.255 "flush": true,
00:03:56.255 "reset": true,
00:03:56.255 "compare": false,
00:03:56.255 "compare_and_write": false,
00:03:56.255 "abort": true,
00:03:56.255 "nvme_admin": false,
00:03:56.255 "nvme_io": false
00:03:56.255 },
00:03:56.255 "memory_domains": [
00:03:56.255 {
00:03:56.255 "dma_device_id": "system",
00:03:56.255 "dma_device_type": 1
00:03:56.255 },
00:03:56.255 {
00:03:56.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:56.255 "dma_device_type": 2
00:03:56.255 }
00:03:56.255 ],
00:03:56.255 "driver_specific": {
00:03:56.255 "passthru": {
00:03:56.255 "name": "Passthru0",
00:03:56.255 "base_bdev_name": "Malloc2"
00:03:56.255 }
00:03:56.255 }
00:03:56.255 }
00:03:56.255 ]'
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:56.255 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:56.514 02:58:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:56.514
00:03:56.514 real 0m0.259s
00:03:56.514 user 0m0.167s
00:03:56.514 sys 0m0.029s
00:03:56.514 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:56.514 02:58:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:56.515 ************************************
00:03:56.515 END TEST rpc_daemon_integrity
00:03:56.515 ************************************
00:03:56.515 02:58:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:03:56.515 02:58:27 rpc -- rpc/rpc.sh@84 -- # killprocess 855194
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@946 -- # '[' -z 855194 ']'
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@950 -- # kill -0 855194
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@951 -- # uname
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 855194
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 855194'
00:03:56.515 killing process with pid 855194
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@965 -- # kill 855194
00:03:56.515 02:58:27 rpc -- common/autotest_common.sh@970 -- # wait 855194
00:03:56.774
00:03:56.774 real 0m2.429s
00:03:56.774 user 0m3.137s
00:03:56.774 sys 0m0.604s
00:03:56.774 02:58:27 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:56.774 02:58:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:56.774 ************************************
00:03:56.774 END TEST rpc
00:03:56.774 ************************************
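Teardown runs through the suite's killprocess helper: confirm the pid is alive with kill -0, check the process name, then kill and wait. A simplified approximation of that helper (the real one lives in common/autotest_common.sh; this sketch is reconstructed from the trace above, not copied from source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1              # still running?
        ps --no-headers -o comm= "$pid"         # reactor_0 here, not sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }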
00:03:56.774 02:58:27 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:03:56.774 02:58:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:56.774 02:58:27 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:56.774 02:58:27 -- common/autotest_common.sh@10 -- # set +x
00:03:56.774 ************************************
00:03:56.774 START TEST skip_rpc
00:03:56.774 ************************************
00:03:56.774 02:58:27 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:03:57.033 * Looking for test storage...
00:03:57.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:57.033 02:58:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:57.033 02:58:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:03:57.033 02:58:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:03:57.033 02:58:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:57.033 02:58:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:57.033 02:58:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:57.033 ************************************
00:03:57.033 START TEST skip_rpc
00:03:57.033 ************************************
00:03:57.033 02:58:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc
00:03:57.033 02:58:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=855834
00:03:57.033 02:58:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:57.033 02:58:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:03:57.033 02:58:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:03:57.033 [2024-05-15 02:58:28.081881] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:03:57.033 [2024-05-15 02:58:28.081917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855834 ]
00:03:57.033 EAL: No free 2048 kB hugepages reported on node 1
00:03:57.033 [2024-05-15 02:58:28.134864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:57.293 [2024-05-15 02:58:28.206638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 855834
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 855834 ']'
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 855834
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 855834
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 855834'
00:04:02.605 killing process with pid 855834
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 855834
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 855834
00:04:02.605
00:04:02.605 real 0m5.392s
00:04:02.605 user 0m5.172s
00:04:02.605 sys 0m0.247s
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:02.605 02:58:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.605 ************************************
00:04:02.605 END TEST skip_rpc
00:04:02.605 ************************************
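test_skip_rpc's core assertion: with --no-rpc-server the target must not answer RPCs, so rpc_cmd spdk_get_version has to fail (hence es=1 above). A standalone sketch of the same check, substituting scripts/rpc.py for the harness's rpc_cmd wrapper:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC server unexpectedly answered"; exit 1
    fi
    kill "$pid"; wait "$pid" || true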
00:04:02.605 02:58:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:02.605 02:58:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:02.605 02:58:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:02.605 02:58:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.605 ************************************
00:04:02.605 START TEST skip_rpc_with_json
00:04:02.605 ************************************
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=856778
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 856778
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 856778 ']'
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:02.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:02.605 02:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:02.605 [2024-05-15 02:58:33.549203] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:04:02.605 [2024-05-15 02:58:33.549242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856778 ]
00:04:02.605 EAL: No free 2048 kB hugepages reported on node 1
00:04:02.605 [2024-05-15 02:58:33.602272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:02.605 [2024-05-15 02:58:33.675413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:03.544 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:03.544 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0
00:04:03.544 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:03.544 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:03.544 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:03.544 [2024-05-15 02:58:34.346131] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:03.544 request:
00:04:03.544 {
00:04:03.544 "trtype": "tcp",
00:04:03.544 "method": "nvmf_get_transports",
00:04:03.544 "req_id": 1
00:04:03.544 }
00:04:03.544 Got JSON-RPC error response
00:04:03.544 response:
00:04:03.544 {
00:04:03.544 "code": -19,
00:04:03.544 "message": "No such device"
00:04:03.544 }
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:03.545 [2024-05-15 02:58:34.354233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:03.545 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:03.545 {
00:04:03.545 "subsystems": [
00:04:03.545 {
00:04:03.545 "subsystem": "vfio_user_target",
00:04:03.545 "config": null
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "keyring",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "iobuf",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "iobuf_set_options",
00:04:03.545 "params": {
00:04:03.545 "small_pool_count": 8192,
00:04:03.545 "large_pool_count": 1024,
00:04:03.545 "small_bufsize": 8192,
00:04:03.545 "large_bufsize": 135168
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "sock",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "sock_impl_set_options",
00:04:03.545 "params": {
00:04:03.545 "impl_name": "posix",
00:04:03.545 "recv_buf_size": 2097152,
00:04:03.545 "send_buf_size": 2097152,
00:04:03.545 "enable_recv_pipe": true,
00:04:03.545 "enable_quickack": false,
00:04:03.545 "enable_placement_id": 0,
00:04:03.545 "enable_zerocopy_send_server": true,
00:04:03.545 "enable_zerocopy_send_client": false,
00:04:03.545 "zerocopy_threshold": 0,
00:04:03.545 "tls_version": 0,
00:04:03.545 "enable_ktls": false
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "sock_impl_set_options",
00:04:03.545 "params": {
00:04:03.545 "impl_name": "ssl",
00:04:03.545 "recv_buf_size": 4096,
00:04:03.545 "send_buf_size": 4096,
00:04:03.545 "enable_recv_pipe": true,
00:04:03.545 "enable_quickack": false,
00:04:03.545 "enable_placement_id": 0,
00:04:03.545 "enable_zerocopy_send_server": true,
00:04:03.545 "enable_zerocopy_send_client": false,
00:04:03.545 "zerocopy_threshold": 0,
00:04:03.545 "tls_version": 0,
00:04:03.545 "enable_ktls": false
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "vmd",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "accel",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "accel_set_options",
00:04:03.545 "params": {
00:04:03.545 "small_cache_size": 128,
00:04:03.545 "large_cache_size": 16,
00:04:03.545 "task_count": 2048,
00:04:03.545 "sequence_count": 2048,
00:04:03.545 "buf_count": 2048
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "bdev",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "bdev_set_options",
00:04:03.545 "params": {
00:04:03.545 "bdev_io_pool_size": 65535,
00:04:03.545 "bdev_io_cache_size": 256,
00:04:03.545 "bdev_auto_examine": true,
00:04:03.545 "iobuf_small_cache_size": 128,
00:04:03.545 "iobuf_large_cache_size": 16
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "bdev_raid_set_options",
00:04:03.545 "params": {
00:04:03.545 "process_window_size_kb": 1024
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "bdev_iscsi_set_options",
00:04:03.545 "params": {
00:04:03.545 "timeout_sec": 30
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "bdev_nvme_set_options",
00:04:03.545 "params": {
00:04:03.545 "action_on_timeout": "none",
00:04:03.545 "timeout_us": 0,
00:04:03.545 "timeout_admin_us": 0,
00:04:03.545 "keep_alive_timeout_ms": 10000,
00:04:03.545 "arbitration_burst": 0,
00:04:03.545 "low_priority_weight": 0,
00:04:03.545 "medium_priority_weight": 0,
00:04:03.545 "high_priority_weight": 0,
00:04:03.545 "nvme_adminq_poll_period_us": 10000,
00:04:03.545 "nvme_ioq_poll_period_us": 0,
00:04:03.545 "io_queue_requests": 0,
00:04:03.545 "delay_cmd_submit": true,
00:04:03.545 "transport_retry_count": 4,
00:04:03.545 "bdev_retry_count": 3,
00:04:03.545 "transport_ack_timeout": 0,
00:04:03.545 "ctrlr_loss_timeout_sec": 0,
00:04:03.545 "reconnect_delay_sec": 0,
00:04:03.545 "fast_io_fail_timeout_sec": 0,
00:04:03.545 "disable_auto_failback": false,
00:04:03.545 "generate_uuids": false,
00:04:03.545 "transport_tos": 0,
00:04:03.545 "nvme_error_stat": false,
00:04:03.545 "rdma_srq_size": 0,
00:04:03.545 "io_path_stat": false,
00:04:03.545 "allow_accel_sequence": false,
00:04:03.545 "rdma_max_cq_size": 0,
00:04:03.545 "rdma_cm_event_timeout_ms": 0,
00:04:03.545 "dhchap_digests": [
00:04:03.545 "sha256",
00:04:03.545 "sha384",
00:04:03.545 "sha512"
00:04:03.545 ],
00:04:03.545 "dhchap_dhgroups": [
00:04:03.545 "null",
00:04:03.545 "ffdhe2048",
00:04:03.545 "ffdhe3072",
00:04:03.545 "ffdhe4096",
00:04:03.545 "ffdhe6144",
00:04:03.545 "ffdhe8192"
00:04:03.545 ]
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "bdev_nvme_set_hotplug",
00:04:03.545 "params": {
00:04:03.545 "period_us": 100000,
00:04:03.545 "enable": false
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "bdev_wait_for_examine"
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "scsi",
00:04:03.545 "config": null
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "scheduler",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "framework_set_scheduler",
00:04:03.545 "params": {
00:04:03.545 "name": "static"
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "vhost_scsi",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "vhost_blk",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "ublk",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "nbd",
00:04:03.545 "config": []
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "nvmf",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "nvmf_set_config",
00:04:03.545 "params": {
00:04:03.545 "discovery_filter": "match_any",
00:04:03.545 "admin_cmd_passthru": {
00:04:03.545 "identify_ctrlr": false
00:04:03.545 }
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "nvmf_set_max_subsystems",
00:04:03.545 "params": {
00:04:03.545 "max_subsystems": 1024
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "nvmf_set_crdt",
00:04:03.545 "params": {
00:04:03.545 "crdt1": 0,
00:04:03.545 "crdt2": 0,
00:04:03.545 "crdt3": 0
00:04:03.545 }
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "method": "nvmf_create_transport",
00:04:03.545 "params": {
00:04:03.545 "trtype": "TCP",
00:04:03.545 "max_queue_depth": 128,
00:04:03.545 "max_io_qpairs_per_ctrlr": 127,
00:04:03.545 "in_capsule_data_size": 4096,
00:04:03.545 "max_io_size": 131072,
00:04:03.545 "io_unit_size": 131072,
00:04:03.545 "max_aq_depth": 128,
00:04:03.545 "num_shared_buffers": 511,
00:04:03.545 "buf_cache_size": 4294967295,
00:04:03.545 "dif_insert_or_strip": false,
00:04:03.545 "zcopy": false,
00:04:03.545 "c2h_success": true,
00:04:03.545 "sock_priority": 0,
00:04:03.545 "abort_timeout_sec": 1,
00:04:03.545 "ack_timeout": 0,
00:04:03.545 "data_wr_pool_size": 0
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 },
00:04:03.545 {
00:04:03.545 "subsystem": "iscsi",
00:04:03.545 "config": [
00:04:03.545 {
00:04:03.545 "method": "iscsi_set_options",
00:04:03.545 "params": {
00:04:03.545 "node_base": "iqn.2016-06.io.spdk",
00:04:03.545 "max_sessions": 128,
00:04:03.545 "max_connections_per_session": 2,
00:04:03.545 "max_queue_depth": 64,
00:04:03.545 "default_time2wait": 2,
00:04:03.545 "default_time2retain": 20,
00:04:03.545 "first_burst_length": 8192,
00:04:03.545 "immediate_data": true,
00:04:03.545 "allow_duplicated_isid": false,
00:04:03.545 "error_recovery_level": 0,
00:04:03.545 "nop_timeout": 60,
00:04:03.545 "nop_in_interval": 30,
00:04:03.545 "disable_chap": false,
00:04:03.545 "require_chap": false,
00:04:03.545 "mutual_chap": false,
00:04:03.545 "chap_group": 0,
00:04:03.545 "max_large_datain_per_connection": 64,
00:04:03.545 "max_r2t_per_connection": 4,
00:04:03.545 "pdu_pool_size": 36864,
00:04:03.545 "immediate_data_pool_size": 16384,
00:04:03.545 "data_out_pool_size": 2048
00:04:03.545 }
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 }
00:04:03.545 ]
00:04:03.545 }
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 856778
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 856778 ']'
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 856778
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 856778
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 856778'
00:04:03.546 killing process with pid 856778
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 856778
00:04:03.546 02:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 856778
00:04:03.805 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=857025
00:04:03.805 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:03.806 02:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 857025
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 857025 ']'
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 857025
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 857025
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 857025'
00:04:09.082 killing process with pid 857025
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 857025
00:04:09.082 02:58:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 857025
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:09.342
00:04:09.342 real 0m6.790s
00:04:09.342 user 0m6.603s
00:04:09.342 sys 0m0.590s
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:09.342 ************************************
00:04:09.342 END TEST skip_rpc_with_json
00:04:09.342 ************************************
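The round-trip just completed: configure a live target over RPC, snapshot it with save_config, then boot a second target non-interactively from that snapshot and grep its log for the transport banner. The same flow by hand, with paths shortened and the RPC names and flags exactly as traced above:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 & pid=$!
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt   # proves the saved config was replayed
    kill "$pid"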
00:04:09.342 02:58:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:09.342 02:58:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:09.342 02:58:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:09.342 02:58:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:09.342 ************************************
00:04:09.342 START TEST skip_rpc_with_delay
00:04:09.342 ************************************
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:09.342 [2024-05-15 02:58:40.414896] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:09.342 [2024-05-15 02:58:40.414955] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:09.342
00:04:09.342 real 0m0.063s
00:04:09.342 user 0m0.044s
00:04:09.342 sys 0m0.019s
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:09.342 02:58:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:09.342 ************************************
00:04:09.342 END TEST skip_rpc_with_delay
00:04:09.342 ************************************
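skip_rpc_with_delay expects spdk_tgt to refuse the contradictory flag pair: --wait-for-rpc defers framework init until an RPC arrives, which is impossible with --no-rpc-server, hence the app.c error above and es=1. As a bare expected-failure check:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: target accepted --wait-for-rpc without an RPC server"; exit 1
    fi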
00:04:09.602 [2024-05-15 02:58:40.555099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857996 ] 00:04:09.602 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.602 [2024-05-15 02:58:40.609331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.602 [2024-05-15 02:58:40.679972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.541 [2024-05-15 02:58:41.404678] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:10.541 [2024-05-15 02:58:41.404721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858230 ] 00:04:10.541 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.541 [2024-05-15 02:58:41.456114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.541 [2024-05-15 02:58:41.529718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.541 [2024-05-15 02:58:41.529784] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:10.541 [2024-05-15 02:58:41.529793] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.541 [2024-05-15 02:58:41.529799] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 857996 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 857996 ']' 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 857996 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 857996 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 857996' 00:04:10.541 killing process with pid 857996 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 857996 00:04:10.541 02:58:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 857996 00:04:11.110 00:04:11.110 real 0m1.497s 00:04:11.110 user 0m1.752s 00:04:11.110 sys 0m0.381s 00:04:11.110 02:58:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.110 02:58:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.110 ************************************ 00:04:11.110 END TEST exit_on_failed_rpc_init 00:04:11.110 ************************************ 00:04:11.110 02:58:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.110 00:04:11.110 real 0m14.129s 00:04:11.110 user 0m13.721s 00:04:11.110 sys 0m1.486s 00:04:11.110 02:58:42 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.110 02:58:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.110 ************************************ 00:04:11.110 END TEST skip_rpc 00:04:11.110 ************************************ 00:04:11.110 02:58:42 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.110 02:58:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.110 02:58:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.110 02:58:42 -- 
common/autotest_common.sh@10 -- # set +x 00:04:11.110 ************************************ 00:04:11.110 START TEST rpc_client 00:04:11.110 ************************************ 00:04:11.110 02:58:42 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.110 * Looking for test storage... 00:04:11.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:11.110 02:58:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.110 OK 00:04:11.110 02:58:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.110 00:04:11.110 real 0m0.111s 00:04:11.110 user 0m0.055s 00:04:11.110 sys 0m0.063s 00:04:11.110 02:58:42 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.110 02:58:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:11.110 ************************************ 00:04:11.110 END TEST rpc_client 00:04:11.110 ************************************ 00:04:11.110 02:58:42 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.110 02:58:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.110 02:58:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.110 02:58:42 -- common/autotest_common.sh@10 -- # set +x 00:04:11.370 ************************************ 00:04:11.370 START TEST json_config 00:04:11.370 ************************************ 00:04:11.370 02:58:42 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.370 02:58:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.370 02:58:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.371 02:58:42 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.371 02:58:42 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.371 02:58:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.371 02:58:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.371 02:58:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.371 02:58:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.371 02:58:42 json_config -- paths/export.sh@5 -- # export PATH 00:04:11.371 02:58:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@47 -- # : 0 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:11.371 02:58:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:11.371 INFO: JSON configuration test init 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.371 02:58:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.371 02:58:42 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.371 02:58:42 json_config -- json_config/common.sh@10 -- # shift 00:04:11.371 02:58:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.371 02:58:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.371 02:58:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.371 02:58:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.371 02:58:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.371 02:58:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=858563 00:04:11.371 02:58:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.371 Waiting for target to run... 
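From here on, every tgt_rpc call in the trace is a thin wrapper that points rpc.py at the target's private socket; with --wait-for-rpc the target idles until those RPCs arrive. A sketch of the wrapper and the config load that follows (load_config reading the gen_nvme.sh output on stdin is an assumption consistent with the back-to-back trace lines):

    tgt_rpc() {
        # all target RPCs go through the per-test UNIX socket
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }

    ./scripts/gen_nvme.sh --json-with-subsystems | tgt_rpc load_config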
00:04:11.371 02:58:42 json_config -- json_config/common.sh@25 -- # waitforlisten 858563 /var/tmp/spdk_tgt.sock 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 858563 ']' 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:11.371 02:58:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:11.371 02:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.371 [2024-05-15 02:58:42.447477] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:11.371 [2024-05-15 02:58:42.447530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858563 ] 00:04:11.371 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.630 [2024-05-15 02:58:42.712338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.630 [2024-05-15 02:58:42.782417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:12.198 02:58:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.198 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.198 02:58:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:12.198 02:58:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:12.198 02:58:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.499 02:58:46 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:15.499 02:58:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.499 02:58:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:15.499 02:58:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:15.500 02:58:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.500 02:58:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.758 MallocForNvmf0 00:04:15.758 02:58:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.758 02:58:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.758 MallocForNvmf1 00:04:15.758 02:58:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.758 02:58:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.017 [2024-05-15 02:58:47.055932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.017 02:58:47 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.017 02:58:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.275 02:58:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.275 02:58:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.275 02:58:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.275 02:58:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.534 02:58:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.534 02:58:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.793 [2024-05-15 02:58:47.729737] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:16.793 [2024-05-15 02:58:47.730053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.793 02:58:47 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:16.793 02:58:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.793 02:58:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 02:58:47 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:16.793 02:58:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.793 02:58:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 02:58:47 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:16.793 02:58:47 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.793 02:58:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.052 MallocBdevForConfigChangeCheck 00:04:17.052 02:58:47 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:17.052 02:58:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.052 02:58:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 02:58:48 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:17.052 02:58:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.312 02:58:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:04:17.312 INFO: shutting down applications... 00:04:17.312 02:58:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:17.312 02:58:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:17.312 02:58:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:17.312 02:58:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.219 Calling clear_iscsi_subsystem 00:04:19.219 Calling clear_nvmf_subsystem 00:04:19.219 Calling clear_nbd_subsystem 00:04:19.219 Calling clear_ublk_subsystem 00:04:19.219 Calling clear_vhost_blk_subsystem 00:04:19.219 Calling clear_vhost_scsi_subsystem 00:04:19.219 Calling clear_bdev_subsystem 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.219 02:58:49 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.219 02:58:50 json_config -- json_config/json_config.sh@345 -- # break 00:04:19.219 02:58:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:19.219 02:58:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:19.219 02:58:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.219 02:58:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.219 02:58:50 json_config -- json_config/common.sh@35 -- # [[ -n 858563 ]] 00:04:19.219 02:58:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 858563 00:04:19.219 [2024-05-15 02:58:50.284338] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:19.219 02:58:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.219 02:58:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.219 02:58:50 json_config -- json_config/common.sh@41 -- # kill -0 858563 00:04:19.219 02:58:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.788 02:58:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.788 02:58:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.788 02:58:50 json_config -- json_config/common.sh@41 -- # kill -0 858563 00:04:19.788 02:58:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.788 02:58:50 json_config -- json_config/common.sh@43 -- # break 00:04:19.788 02:58:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.788 02:58:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.788 SPDK target shutdown done 00:04:19.788 02:58:50 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:19.788 INFO: relaunching applications... 00:04:19.788 02:58:50 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.788 02:58:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.788 02:58:50 json_config -- json_config/common.sh@10 -- # shift 00:04:19.788 02:58:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.788 02:58:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.788 02:58:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.788 02:58:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.788 02:58:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.788 02:58:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=860069 00:04:19.788 02:58:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.788 Waiting for target to run... 00:04:19.788 02:58:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.788 02:58:50 json_config -- json_config/common.sh@25 -- # waitforlisten 860069 /var/tmp/spdk_tgt.sock 00:04:19.788 02:58:50 json_config -- common/autotest_common.sh@827 -- # '[' -z 860069 ']' 00:04:19.788 02:58:50 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.788 02:58:50 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:19.788 02:58:50 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.789 02:58:50 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:19.789 02:58:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.789 [2024-05-15 02:58:50.841105] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
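The shutdown just traced follows a fixed pattern: SIGINT the target, then poll kill -0 for up to 30 half-second intervals before declaring 'SPDK target shutdown done'. A condensed sketch with the loop bounds taken from the trace:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # pid gone -> clean shutdown
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1    # still alive after ~15s
    }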
00:04:19.789 [2024-05-15 02:58:50.841162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860069 ] 00:04:19.789 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.048 [2024-05-15 02:58:51.123846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.048 [2024-05-15 02:58:51.191065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.346 [2024-05-15 02:58:54.195018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.346 [2024-05-15 02:58:54.227017] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:23.346 [2024-05-15 02:58:54.227326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.346 02:58:54 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:23.346 02:58:54 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:23.346 02:58:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:23.346 00:04:23.346 02:58:54 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:23.346 02:58:54 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:23.346 INFO: Checking if target configuration is the same... 00:04:23.346 02:58:54 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.346 02:58:54 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:23.346 02:58:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.346 + '[' 2 -ne 2 ']' 00:04:23.346 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.346 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:23.346 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.346 +++ basename /dev/fd/62 00:04:23.346 ++ mktemp /tmp/62.XXX 00:04:23.346 + tmp_file_1=/tmp/62.Dci 00:04:23.346 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.346 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.346 + tmp_file_2=/tmp/spdk_tgt_config.json.OXN 00:04:23.346 + ret=0 00:04:23.346 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.605 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.605 + diff -u /tmp/62.Dci /tmp/spdk_tgt_config.json.OXN 00:04:23.605 + echo 'INFO: JSON config files are the same' 00:04:23.605 INFO: JSON config files are the same 00:04:23.605 + rm /tmp/62.Dci /tmp/spdk_tgt_config.json.OXN 00:04:23.605 + exit 0 00:04:23.605 02:58:54 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:23.605 02:58:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:23.605 INFO: changing configuration and checking if this can be detected... 
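The json_diff.sh run above reduces to: normalize both configs with config_filter.py -method sort, then diff the results. A condensed sketch (temp file names are hypothetical; config_filter.py reading stdin and writing stdout is an assumption matching its argument-less invocation in the trace):

    tgt_rpc save_config > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/live.sorted /tmp/disk.sorted && echo 'INFO: JSON config files are the same'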
00:04:23.605 02:58:54 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.605 02:58:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.864 02:58:54 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.864 02:58:54 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:23.864 02:58:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.864 + '[' 2 -ne 2 ']' 00:04:23.864 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.864 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:23.864 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.864 +++ basename /dev/fd/62 00:04:23.864 ++ mktemp /tmp/62.XXX 00:04:23.864 + tmp_file_1=/tmp/62.2EB 00:04:23.864 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.864 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.864 + tmp_file_2=/tmp/spdk_tgt_config.json.VYp 00:04:23.864 + ret=0 00:04:23.864 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.123 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.123 + diff -u /tmp/62.2EB /tmp/spdk_tgt_config.json.VYp 00:04:24.123 + ret=1 00:04:24.123 + echo '=== Start of file: /tmp/62.2EB ===' 00:04:24.123 + cat /tmp/62.2EB 00:04:24.123 + echo '=== End of file: /tmp/62.2EB ===' 00:04:24.123 + echo '' 00:04:24.123 + echo '=== Start of file: /tmp/spdk_tgt_config.json.VYp ===' 00:04:24.123 + cat /tmp/spdk_tgt_config.json.VYp 00:04:24.123 + echo '=== End of file: /tmp/spdk_tgt_config.json.VYp ===' 00:04:24.123 + echo '' 00:04:24.123 + rm /tmp/62.2EB /tmp/spdk_tgt_config.json.VYp 00:04:24.123 + exit 1 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:24.123 INFO: configuration change detected. 
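Change detection reuses that same sorted diff after mutating the live config: the sentinel bdev created at setup is deleted, so the next comparison against spdk_tgt_config.json must fail. A sketch compressing the round trip (temp names again hypothetical):

    tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck   # drop the sentinel bdev
    tgt_rpc save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/live.sorted /tmp/disk.sorted || echo 'INFO: configuration change detected.'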
00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@317 -- # [[ -n 860069 ]] 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.123 02:58:55 json_config -- json_config/json_config.sh@323 -- # killprocess 860069 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@946 -- # '[' -z 860069 ']' 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@950 -- # kill -0 860069 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@951 -- # uname 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 860069 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 860069' 00:04:24.123 killing process with pid 860069 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@965 -- # kill 860069 00:04:24.123 [2024-05-15 02:58:55.272626] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:24.123 02:58:55 json_config -- common/autotest_common.sh@970 -- # wait 860069 00:04:26.066 02:58:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.066 02:58:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:26.066 02:58:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.066 02:58:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.066 02:58:56 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:26.066 02:58:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:26.066 INFO: Success 00:04:26.066 00:04:26.066 real 0m14.517s 00:04:26.066 user 0m15.418s 00:04:26.066 sys 0m1.665s 00:04:26.066 02:58:56 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.066 02:58:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.066 ************************************ 00:04:26.066 END TEST json_config 00:04:26.066 ************************************ 00:04:26.066 02:58:56 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.066 02:58:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.066 02:58:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.066 02:58:56 -- common/autotest_common.sh@10 -- # set +x 00:04:26.066 ************************************ 00:04:26.066 START TEST json_config_extra_key 00:04:26.066 ************************************ 00:04:26.066 02:58:56 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.066 02:58:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.066 02:58:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.066 
02:58:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.066 02:58:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.066 02:58:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.066 02:58:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.066 02:58:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:26.066 02:58:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:26.066 02:58:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:26.066 02:58:56 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:26.066 INFO: launching applications... 00:04:26.066 02:58:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.066 02:58:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.067 02:58:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.067 02:58:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=861115 00:04:26.067 02:58:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.067 Waiting for target to run... 00:04:26.067 02:58:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 861115 /var/tmp/spdk_tgt.sock 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 861115 ']' 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.067 02:58:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:26.067 02:58:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.067 [2024-05-15 02:58:57.030317] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
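Unlike the RPC-driven json_config flow, this test boots the target directly from a canned config via --json and only then talks to the socket. A sketch of the launch as traced (the readiness poll via rpc_get_methods is an assumption, as before):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; do
        sleep 0.1    # wait for the RPC socket to come up
    done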
00:04:26.067 [2024-05-15 02:58:57.030367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861115 ] 00:04:26.067 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.326 [2024-05-15 02:58:57.470822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.585 [2024-05-15 02:58:57.554224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.844 02:58:57 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:26.844 02:58:57 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:26.844 00:04:26.844 02:58:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:26.844 INFO: shutting down applications... 00:04:26.844 02:58:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 861115 ]] 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 861115 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 861115 00:04:26.844 02:58:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 861115 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.412 02:58:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.412 SPDK target shutdown done 00:04:27.412 02:58:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:27.412 Success 00:04:27.412 00:04:27.412 real 0m1.433s 00:04:27.412 user 0m1.075s 00:04:27.412 sys 0m0.515s 00:04:27.412 02:58:58 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.412 02:58:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.412 ************************************ 00:04:27.412 END TEST json_config_extra_key 00:04:27.412 ************************************ 00:04:27.412 02:58:58 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.412 02:58:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.412 02:58:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.412 02:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:27.412 ************************************ 
00:04:27.412 START TEST alias_rpc 00:04:27.412 ************************************ 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.412 * Looking for test storage... 00:04:27.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:27.412 02:58:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.412 02:58:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=861405 00:04:27.412 02:58:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.412 02:58:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 861405 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 861405 ']' 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:27.412 02:58:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.412 [2024-05-15 02:58:58.536108] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:27.412 [2024-05-15 02:58:58.536161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861405 ] 00:04:27.412 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.671 [2024-05-15 02:58:58.591018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.671 [2024-05-15 02:58:58.665356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.239 02:58:59 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:28.239 02:58:59 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:28.239 02:58:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:28.498 02:58:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 861405 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 861405 ']' 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 861405 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 861405 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 861405' 00:04:28.498 killing process with pid 861405 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@965 -- # kill 861405 00:04:28.498 02:58:59 alias_rpc -- common/autotest_common.sh@970 -- # wait 861405 00:04:28.757 
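The teardown traced above is autotest_common.sh's killprocess helper: confirm the PID is still alive with kill -0, look up its process name (an SPDK target shows up as reactor_0 in ps), then kill it and wait so the exit status can be checked. A condensed sketch of the same idea, assuming $1 is the target PID (the real helper also special-cases processes launched through sudo, visible in the trace as the reactor_0 = sudo test):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                            # fail early if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                               # reap it before the test moves on
    }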
00:04:28.757 real 0m1.508s 00:04:28.757 user 0m1.632s 00:04:28.757 sys 0m0.401s 00:04:28.758 02:58:59 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.758 02:58:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.758 ************************************ 00:04:28.758 END TEST alias_rpc 00:04:28.758 ************************************ 00:04:29.017 02:58:59 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:29.017 02:58:59 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.017 02:58:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.017 02:58:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.017 02:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.017 ************************************ 00:04:29.017 START TEST spdkcli_tcp 00:04:29.017 ************************************ 00:04:29.017 02:58:59 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:29.017 * Looking for test storage... 00:04:29.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=861834 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 861834 00:04:29.017 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 861834 ']' 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.017 02:59:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.017 [2024-05-15 02:59:00.109703] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
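This target runs with -m 0x3 (binary 11, so reactors on cores 0 and 1, matching the two "Reactor started" notices below) but still listens only on a UNIX socket; tcp.sh reaches it over TCP by interposing socat and pointing rpc.py at 127.0.0.1:9998, exactly as the trace below shows at tcp.sh@30 and tcp.sh@33. The bridge in isolation, with the addresses used in this run:

    # Relay TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # rpc.py now works over TCP; -r and -t add retries and a timeout.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"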
00:04:29.017 [2024-05-15 02:59:00.109757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861834 ] 00:04:29.017 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.017 [2024-05-15 02:59:00.164288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.277 [2024-05-15 02:59:00.238599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.277 [2024-05-15 02:59:00.238602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.845 02:59:00 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:29.845 02:59:00 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:29.845 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=861913 00:04:29.845 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:29.845 02:59:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.105 [ 00:04:30.105 "bdev_malloc_delete", 00:04:30.105 "bdev_malloc_create", 00:04:30.105 "bdev_null_resize", 00:04:30.105 "bdev_null_delete", 00:04:30.105 "bdev_null_create", 00:04:30.105 "bdev_nvme_cuse_unregister", 00:04:30.105 "bdev_nvme_cuse_register", 00:04:30.105 "bdev_opal_new_user", 00:04:30.105 "bdev_opal_set_lock_state", 00:04:30.105 "bdev_opal_delete", 00:04:30.105 "bdev_opal_get_info", 00:04:30.105 "bdev_opal_create", 00:04:30.105 "bdev_nvme_opal_revert", 00:04:30.105 "bdev_nvme_opal_init", 00:04:30.105 "bdev_nvme_send_cmd", 00:04:30.105 "bdev_nvme_get_path_iostat", 00:04:30.105 "bdev_nvme_get_mdns_discovery_info", 00:04:30.105 "bdev_nvme_stop_mdns_discovery", 00:04:30.105 "bdev_nvme_start_mdns_discovery", 00:04:30.105 "bdev_nvme_set_multipath_policy", 00:04:30.105 "bdev_nvme_set_preferred_path", 00:04:30.105 "bdev_nvme_get_io_paths", 00:04:30.105 "bdev_nvme_remove_error_injection", 00:04:30.105 "bdev_nvme_add_error_injection", 00:04:30.105 "bdev_nvme_get_discovery_info", 00:04:30.105 "bdev_nvme_stop_discovery", 00:04:30.105 "bdev_nvme_start_discovery", 00:04:30.105 "bdev_nvme_get_controller_health_info", 00:04:30.105 "bdev_nvme_disable_controller", 00:04:30.105 "bdev_nvme_enable_controller", 00:04:30.105 "bdev_nvme_reset_controller", 00:04:30.105 "bdev_nvme_get_transport_statistics", 00:04:30.105 "bdev_nvme_apply_firmware", 00:04:30.105 "bdev_nvme_detach_controller", 00:04:30.105 "bdev_nvme_get_controllers", 00:04:30.105 "bdev_nvme_attach_controller", 00:04:30.105 "bdev_nvme_set_hotplug", 00:04:30.105 "bdev_nvme_set_options", 00:04:30.105 "bdev_passthru_delete", 00:04:30.105 "bdev_passthru_create", 00:04:30.105 "bdev_lvol_check_shallow_copy", 00:04:30.105 "bdev_lvol_start_shallow_copy", 00:04:30.105 "bdev_lvol_grow_lvstore", 00:04:30.105 "bdev_lvol_get_lvols", 00:04:30.105 "bdev_lvol_get_lvstores", 00:04:30.105 "bdev_lvol_delete", 00:04:30.105 "bdev_lvol_set_read_only", 00:04:30.105 "bdev_lvol_resize", 00:04:30.105 "bdev_lvol_decouple_parent", 00:04:30.105 "bdev_lvol_inflate", 00:04:30.105 "bdev_lvol_rename", 00:04:30.105 "bdev_lvol_clone_bdev", 00:04:30.105 "bdev_lvol_clone", 00:04:30.105 "bdev_lvol_snapshot", 00:04:30.105 "bdev_lvol_create", 00:04:30.105 "bdev_lvol_delete_lvstore", 00:04:30.105 "bdev_lvol_rename_lvstore", 00:04:30.105 "bdev_lvol_create_lvstore", 00:04:30.105 "bdev_raid_set_options", 
00:04:30.105 "bdev_raid_remove_base_bdev", 00:04:30.105 "bdev_raid_add_base_bdev", 00:04:30.105 "bdev_raid_delete", 00:04:30.105 "bdev_raid_create", 00:04:30.105 "bdev_raid_get_bdevs", 00:04:30.105 "bdev_error_inject_error", 00:04:30.105 "bdev_error_delete", 00:04:30.105 "bdev_error_create", 00:04:30.105 "bdev_split_delete", 00:04:30.105 "bdev_split_create", 00:04:30.106 "bdev_delay_delete", 00:04:30.106 "bdev_delay_create", 00:04:30.106 "bdev_delay_update_latency", 00:04:30.106 "bdev_zone_block_delete", 00:04:30.106 "bdev_zone_block_create", 00:04:30.106 "blobfs_create", 00:04:30.106 "blobfs_detect", 00:04:30.106 "blobfs_set_cache_size", 00:04:30.106 "bdev_aio_delete", 00:04:30.106 "bdev_aio_rescan", 00:04:30.106 "bdev_aio_create", 00:04:30.106 "bdev_ftl_set_property", 00:04:30.106 "bdev_ftl_get_properties", 00:04:30.106 "bdev_ftl_get_stats", 00:04:30.106 "bdev_ftl_unmap", 00:04:30.106 "bdev_ftl_unload", 00:04:30.106 "bdev_ftl_delete", 00:04:30.106 "bdev_ftl_load", 00:04:30.106 "bdev_ftl_create", 00:04:30.106 "bdev_virtio_attach_controller", 00:04:30.106 "bdev_virtio_scsi_get_devices", 00:04:30.106 "bdev_virtio_detach_controller", 00:04:30.106 "bdev_virtio_blk_set_hotplug", 00:04:30.106 "bdev_iscsi_delete", 00:04:30.106 "bdev_iscsi_create", 00:04:30.106 "bdev_iscsi_set_options", 00:04:30.106 "accel_error_inject_error", 00:04:30.106 "ioat_scan_accel_module", 00:04:30.106 "dsa_scan_accel_module", 00:04:30.106 "iaa_scan_accel_module", 00:04:30.106 "vfu_virtio_create_scsi_endpoint", 00:04:30.106 "vfu_virtio_scsi_remove_target", 00:04:30.106 "vfu_virtio_scsi_add_target", 00:04:30.106 "vfu_virtio_create_blk_endpoint", 00:04:30.106 "vfu_virtio_delete_endpoint", 00:04:30.106 "keyring_file_remove_key", 00:04:30.106 "keyring_file_add_key", 00:04:30.106 "iscsi_get_histogram", 00:04:30.106 "iscsi_enable_histogram", 00:04:30.106 "iscsi_set_options", 00:04:30.106 "iscsi_get_auth_groups", 00:04:30.106 "iscsi_auth_group_remove_secret", 00:04:30.106 "iscsi_auth_group_add_secret", 00:04:30.106 "iscsi_delete_auth_group", 00:04:30.106 "iscsi_create_auth_group", 00:04:30.106 "iscsi_set_discovery_auth", 00:04:30.106 "iscsi_get_options", 00:04:30.106 "iscsi_target_node_request_logout", 00:04:30.106 "iscsi_target_node_set_redirect", 00:04:30.106 "iscsi_target_node_set_auth", 00:04:30.106 "iscsi_target_node_add_lun", 00:04:30.106 "iscsi_get_stats", 00:04:30.106 "iscsi_get_connections", 00:04:30.106 "iscsi_portal_group_set_auth", 00:04:30.106 "iscsi_start_portal_group", 00:04:30.106 "iscsi_delete_portal_group", 00:04:30.106 "iscsi_create_portal_group", 00:04:30.106 "iscsi_get_portal_groups", 00:04:30.106 "iscsi_delete_target_node", 00:04:30.106 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.106 "iscsi_target_node_add_pg_ig_maps", 00:04:30.106 "iscsi_create_target_node", 00:04:30.106 "iscsi_get_target_nodes", 00:04:30.106 "iscsi_delete_initiator_group", 00:04:30.106 "iscsi_initiator_group_remove_initiators", 00:04:30.106 "iscsi_initiator_group_add_initiators", 00:04:30.106 "iscsi_create_initiator_group", 00:04:30.106 "iscsi_get_initiator_groups", 00:04:30.106 "nvmf_set_crdt", 00:04:30.106 "nvmf_set_config", 00:04:30.106 "nvmf_set_max_subsystems", 00:04:30.106 "nvmf_subsystem_get_listeners", 00:04:30.106 "nvmf_subsystem_get_qpairs", 00:04:30.106 "nvmf_subsystem_get_controllers", 00:04:30.106 "nvmf_get_stats", 00:04:30.106 "nvmf_get_transports", 00:04:30.106 "nvmf_create_transport", 00:04:30.106 "nvmf_get_targets", 00:04:30.106 "nvmf_delete_target", 00:04:30.106 "nvmf_create_target", 00:04:30.106 
"nvmf_subsystem_allow_any_host", 00:04:30.106 "nvmf_subsystem_remove_host", 00:04:30.106 "nvmf_subsystem_add_host", 00:04:30.106 "nvmf_ns_remove_host", 00:04:30.106 "nvmf_ns_add_host", 00:04:30.106 "nvmf_subsystem_remove_ns", 00:04:30.106 "nvmf_subsystem_add_ns", 00:04:30.106 "nvmf_subsystem_listener_set_ana_state", 00:04:30.106 "nvmf_discovery_get_referrals", 00:04:30.106 "nvmf_discovery_remove_referral", 00:04:30.106 "nvmf_discovery_add_referral", 00:04:30.106 "nvmf_subsystem_remove_listener", 00:04:30.106 "nvmf_subsystem_add_listener", 00:04:30.106 "nvmf_delete_subsystem", 00:04:30.106 "nvmf_create_subsystem", 00:04:30.106 "nvmf_get_subsystems", 00:04:30.106 "env_dpdk_get_mem_stats", 00:04:30.106 "nbd_get_disks", 00:04:30.106 "nbd_stop_disk", 00:04:30.106 "nbd_start_disk", 00:04:30.106 "ublk_recover_disk", 00:04:30.106 "ublk_get_disks", 00:04:30.106 "ublk_stop_disk", 00:04:30.106 "ublk_start_disk", 00:04:30.106 "ublk_destroy_target", 00:04:30.106 "ublk_create_target", 00:04:30.106 "virtio_blk_create_transport", 00:04:30.106 "virtio_blk_get_transports", 00:04:30.106 "vhost_controller_set_coalescing", 00:04:30.106 "vhost_get_controllers", 00:04:30.106 "vhost_delete_controller", 00:04:30.106 "vhost_create_blk_controller", 00:04:30.106 "vhost_scsi_controller_remove_target", 00:04:30.106 "vhost_scsi_controller_add_target", 00:04:30.106 "vhost_start_scsi_controller", 00:04:30.106 "vhost_create_scsi_controller", 00:04:30.106 "thread_set_cpumask", 00:04:30.106 "framework_get_scheduler", 00:04:30.106 "framework_set_scheduler", 00:04:30.106 "framework_get_reactors", 00:04:30.106 "thread_get_io_channels", 00:04:30.106 "thread_get_pollers", 00:04:30.106 "thread_get_stats", 00:04:30.106 "framework_monitor_context_switch", 00:04:30.106 "spdk_kill_instance", 00:04:30.106 "log_enable_timestamps", 00:04:30.106 "log_get_flags", 00:04:30.106 "log_clear_flag", 00:04:30.106 "log_set_flag", 00:04:30.106 "log_get_level", 00:04:30.106 "log_set_level", 00:04:30.106 "log_get_print_level", 00:04:30.106 "log_set_print_level", 00:04:30.106 "framework_enable_cpumask_locks", 00:04:30.106 "framework_disable_cpumask_locks", 00:04:30.106 "framework_wait_init", 00:04:30.106 "framework_start_init", 00:04:30.106 "scsi_get_devices", 00:04:30.106 "bdev_get_histogram", 00:04:30.106 "bdev_enable_histogram", 00:04:30.106 "bdev_set_qos_limit", 00:04:30.106 "bdev_set_qd_sampling_period", 00:04:30.106 "bdev_get_bdevs", 00:04:30.106 "bdev_reset_iostat", 00:04:30.106 "bdev_get_iostat", 00:04:30.106 "bdev_examine", 00:04:30.106 "bdev_wait_for_examine", 00:04:30.106 "bdev_set_options", 00:04:30.106 "notify_get_notifications", 00:04:30.106 "notify_get_types", 00:04:30.106 "accel_get_stats", 00:04:30.106 "accel_set_options", 00:04:30.106 "accel_set_driver", 00:04:30.106 "accel_crypto_key_destroy", 00:04:30.106 "accel_crypto_keys_get", 00:04:30.106 "accel_crypto_key_create", 00:04:30.106 "accel_assign_opc", 00:04:30.106 "accel_get_module_info", 00:04:30.106 "accel_get_opc_assignments", 00:04:30.106 "vmd_rescan", 00:04:30.106 "vmd_remove_device", 00:04:30.106 "vmd_enable", 00:04:30.106 "sock_get_default_impl", 00:04:30.106 "sock_set_default_impl", 00:04:30.106 "sock_impl_set_options", 00:04:30.106 "sock_impl_get_options", 00:04:30.106 "iobuf_get_stats", 00:04:30.106 "iobuf_set_options", 00:04:30.106 "keyring_get_keys", 00:04:30.106 "framework_get_pci_devices", 00:04:30.106 "framework_get_config", 00:04:30.106 "framework_get_subsystems", 00:04:30.106 "vfu_tgt_set_base_path", 00:04:30.106 "trace_get_info", 00:04:30.106 
"trace_get_tpoint_group_mask", 00:04:30.106 "trace_disable_tpoint_group", 00:04:30.106 "trace_enable_tpoint_group", 00:04:30.106 "trace_clear_tpoint_mask", 00:04:30.106 "trace_set_tpoint_mask", 00:04:30.106 "spdk_get_version", 00:04:30.106 "rpc_get_methods" 00:04:30.106 ] 00:04:30.106 02:59:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.106 02:59:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.106 02:59:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 861834 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 861834 ']' 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 861834 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 861834 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 861834' 00:04:30.106 killing process with pid 861834 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 861834 00:04:30.106 02:59:01 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 861834 00:04:30.366 00:04:30.366 real 0m1.486s 00:04:30.366 user 0m2.716s 00:04:30.366 sys 0m0.410s 00:04:30.366 02:59:01 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:30.366 02:59:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.366 ************************************ 00:04:30.366 END TEST spdkcli_tcp 00:04:30.366 ************************************ 00:04:30.366 02:59:01 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.366 02:59:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:30.366 02:59:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.366 02:59:01 -- common/autotest_common.sh@10 -- # set +x 00:04:30.366 ************************************ 00:04:30.366 START TEST dpdk_mem_utility 00:04:30.366 ************************************ 00:04:30.366 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.625 * Looking for test storage... 
00:04:30.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:30.625 02:59:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:30.625 02:59:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=862202 00:04:30.625 02:59:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.625 02:59:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 862202 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 862202 ']' 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:30.625 02:59:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.625 [2024-05-15 02:59:01.648682] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:30.625 [2024-05-15 02:59:01.648732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862202 ] 00:04:30.625 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.625 [2024-05-15 02:59:01.701267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.626 [2024-05-15 02:59:01.781721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.563 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:31.563 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:31.563 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.563 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.563 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.563 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.563 { 00:04:31.563 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.563 } 00:04:31.563 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.563 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.563 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:31.563 1 heaps totaling size 814.000000 MiB 00:04:31.563 size: 814.000000 MiB heap id: 0 00:04:31.563 end heaps---------- 00:04:31.563 8 mempools totaling size 598.116089 MiB 00:04:31.563 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.563 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.563 size: 84.521057 MiB name: bdev_io_862202 00:04:31.563 size: 51.011292 MiB name: evtpool_862202 00:04:31.563 size: 50.003479 MiB name: 
msgpool_862202 00:04:31.563 size: 21.763794 MiB name: PDU_Pool 00:04:31.563 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.563 size: 0.026123 MiB name: Session_Pool 00:04:31.563 end mempools------- 00:04:31.563 6 memzones totaling size 4.142822 MiB 00:04:31.563 size: 1.000366 MiB name: RG_ring_0_862202 00:04:31.563 size: 1.000366 MiB name: RG_ring_1_862202 00:04:31.563 size: 1.000366 MiB name: RG_ring_4_862202 00:04:31.563 size: 1.000366 MiB name: RG_ring_5_862202 00:04:31.563 size: 0.125366 MiB name: RG_ring_2_862202 00:04:31.563 size: 0.015991 MiB name: RG_ring_3_862202 00:04:31.563 end memzones------- 00:04:31.564 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.564 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:31.564 list of free elements. size: 12.519348 MiB 00:04:31.564 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:31.564 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:31.564 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:31.564 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:31.564 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:31.564 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:31.564 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:31.564 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:31.564 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:31.564 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:31.564 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:31.564 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:31.564 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:31.564 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:31.564 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:31.564 list of standard malloc elements. 
size: 199.218079 MiB 00:04:31.564 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:31.564 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:31.564 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:31.564 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:31.564 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:31.564 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:31.564 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:31.564 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:31.564 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:31.564 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:31.564 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:31.564 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:31.564 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:31.564 list of memzone associated elements. 
size: 602.262573 MiB 00:04:31.564 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:31.564 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.564 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:31.564 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.564 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:31.564 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_862202_0 00:04:31.564 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:31.564 associated memzone info: size: 48.002930 MiB name: MP_evtpool_862202_0 00:04:31.564 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:31.564 associated memzone info: size: 48.002930 MiB name: MP_msgpool_862202_0 00:04:31.564 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:31.564 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.564 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:31.564 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.564 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:31.564 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_862202 00:04:31.564 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:31.564 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_862202 00:04:31.564 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:31.564 associated memzone info: size: 1.007996 MiB name: MP_evtpool_862202 00:04:31.564 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:31.564 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.564 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:31.564 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.564 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:31.564 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.564 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:31.564 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.564 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:31.564 associated memzone info: size: 1.000366 MiB name: RG_ring_0_862202 00:04:31.564 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:31.564 associated memzone info: size: 1.000366 MiB name: RG_ring_1_862202 00:04:31.564 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:31.564 associated memzone info: size: 1.000366 MiB name: RG_ring_4_862202 00:04:31.564 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:31.564 associated memzone info: size: 1.000366 MiB name: RG_ring_5_862202 00:04:31.564 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:31.564 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_862202 00:04:31.564 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:31.564 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.564 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:31.564 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.564 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:31.564 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.564 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:31.564 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_862202 00:04:31.564 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:31.564 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.564 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:31.564 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.564 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:31.564 associated memzone info: size: 0.015991 MiB name: RG_ring_3_862202 00:04:31.564 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:31.564 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.564 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:31.564 associated memzone info: size: 0.000183 MiB name: MP_msgpool_862202 00:04:31.564 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:31.564 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_862202 00:04:31.564 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:31.564 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.564 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.564 02:59:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 862202 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 862202 ']' 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 862202 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 862202 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 862202' 00:04:31.564 killing process with pid 862202 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 862202 00:04:31.564 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 862202 00:04:31.824 00:04:31.824 real 0m1.383s 00:04:31.824 user 0m1.457s 00:04:31.824 sys 0m0.367s 00:04:31.824 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.824 02:59:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.824 ************************************ 00:04:31.824 END TEST dpdk_mem_utility 00:04:31.824 ************************************ 00:04:31.824 02:59:02 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:31.824 02:59:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.824 02:59:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.824 02:59:02 -- common/autotest_common.sh@10 -- # set +x 00:04:31.824 ************************************ 00:04:31.824 START TEST event 00:04:31.824 ************************************ 00:04:31.824 02:59:02 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:32.083 * Looking for test storage... 
00:04:32.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:32.083 02:59:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:32.083 02:59:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:32.083 02:59:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.083 02:59:03 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:32.083 02:59:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.083 02:59:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.083 ************************************ 00:04:32.083 START TEST event_perf 00:04:32.083 ************************************ 00:04:32.083 02:59:03 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.083 Running I/O for 1 seconds...[2024-05-15 02:59:03.119688] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:32.083 [2024-05-15 02:59:03.119739] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862489 ] 00:04:32.083 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.083 [2024-05-15 02:59:03.168448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.083 [2024-05-15 02:59:03.242600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.083 [2024-05-15 02:59:03.242698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.083 [2024-05-15 02:59:03.242784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.083 [2024-05-15 02:59:03.242785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.463 Running I/O for 1 seconds... 00:04:33.463 lcore 0: 199964 00:04:33.463 lcore 1: 199964 00:04:33.463 lcore 2: 199964 00:04:33.463 lcore 3: 199964 00:04:33.463 done. 00:04:33.463 00:04:33.463 real 0m1.224s 00:04:33.463 user 0m4.155s 00:04:33.463 sys 0m0.065s 00:04:33.463 02:59:04 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.463 02:59:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.463 ************************************ 00:04:33.463 END TEST event_perf 00:04:33.463 ************************************ 00:04:33.463 02:59:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.463 02:59:04 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:33.463 02:59:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.463 02:59:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.463 ************************************ 00:04:33.463 START TEST event_reactor 00:04:33.463 ************************************ 00:04:33.463 02:59:04 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.463 [2024-05-15 02:59:04.425500] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
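The event_perf run that finished above used -m 0xF (0b1111), one reactor on each of lcores 0 through 3, with -t 1 for a one-second measurement; the four counters show each reactor turned roughly 200 k events (199 964 per lcore) in that window. Only the mask and duration need to change for a different footprint, for example (illustrative invocation, same two flags as in the trace):

    # Two reactors on cores 0-1, five-second run.
    test/event/event_perf/event_perf -m 0x3 -t 5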
00:04:33.463 [2024-05-15 02:59:04.425579] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862745 ] 00:04:33.463 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.463 [2024-05-15 02:59:04.481447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.463 [2024-05-15 02:59:04.554025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.843 test_start 00:04:34.843 oneshot 00:04:34.843 tick 100 00:04:34.843 tick 100 00:04:34.843 tick 250 00:04:34.843 tick 100 00:04:34.843 tick 100 00:04:34.843 tick 100 00:04:34.843 tick 250 00:04:34.843 tick 500 00:04:34.843 tick 100 00:04:34.843 tick 100 00:04:34.843 tick 250 00:04:34.843 tick 100 00:04:34.843 tick 100 00:04:34.843 test_end 00:04:34.843 00:04:34.843 real 0m1.235s 00:04:34.843 user 0m1.160s 00:04:34.843 sys 0m0.071s 00:04:34.843 02:59:05 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.843 02:59:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:34.843 ************************************ 00:04:34.843 END TEST event_reactor 00:04:34.843 ************************************ 00:04:34.843 02:59:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.843 02:59:05 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:34.843 02:59:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.843 02:59:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.843 ************************************ 00:04:34.843 START TEST event_reactor_perf 00:04:34.843 ************************************ 00:04:34.843 02:59:05 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.843 [2024-05-15 02:59:05.719556] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:04:34.843 [2024-05-15 02:59:05.719596] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862993 ] 00:04:34.843 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.844 [2024-05-15 02:59:05.772354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.844 [2024-05-15 02:59:05.843439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.821 test_start 00:04:35.821 test_end 00:04:35.821 Performance: 502621 events per second 00:04:35.821 00:04:35.821 real 0m1.223s 00:04:35.821 user 0m1.149s 00:04:35.821 sys 0m0.071s 00:04:35.821 02:59:06 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.821 02:59:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.821 ************************************ 00:04:35.821 END TEST event_reactor_perf 00:04:35.821 ************************************ 00:04:35.821 02:59:06 event -- event/event.sh@49 -- # uname -s 00:04:35.821 02:59:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:35.821 02:59:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:35.821 02:59:06 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.821 02:59:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.821 02:59:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.081 ************************************ 00:04:36.081 START TEST event_scheduler 00:04:36.081 ************************************ 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.081 * Looking for test storage... 00:04:36.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:36.081 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.081 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.081 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=863272 00:04:36.081 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.081 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 863272 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 863272 ']' 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
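The scheduler app above is started with --wait-for-rpc, so it comes up paused before subsystem initialization; the trace that follows configures the dynamic scheduler over RPC and only then calls framework_start_init. Reduced to the bare sequence (both RPC methods appear in the rpc_get_methods dump earlier in this log; paths are relative to the SPDK tree, and a waitforlisten-style delay is elided):

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # ...wait for /var/tmp/spdk.sock to appear, then:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init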
00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:36.081 02:59:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.081 [2024-05-15 02:59:07.117909] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:36.081 [2024-05-15 02:59:07.117954] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863272 ] 00:04:36.081 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.081 [2024-05-15 02:59:07.169695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.339 [2024-05-15 02:59:07.245554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.339 [2024-05-15 02:59:07.245654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.339 [2024-05-15 02:59:07.245740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.339 [2024-05-15 02:59:07.245742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:36.963 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 POWER: Env isn't set yet! 00:04:36.963 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:36.963 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:36.963 POWER: Cannot set governor of lcore 0 to userspace 00:04:36.963 POWER: Attempting to initialise PSTAT power management... 00:04:36.963 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:36.963 POWER: Initialized successfully for lcore 0 power management 00:04:36.963 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:36.963 POWER: Initialized successfully for lcore 1 power management 00:04:36.963 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:36.963 POWER: Initialized successfully for lcore 2 power management 00:04:36.963 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:36.963 POWER: Initialized successfully for lcore 3 power management 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.963 02:59:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 [2024-05-15 02:59:08.038804] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
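The POWER lines above come from enabling the dynamic scheduler: DPDK's power library first tries the generic ACPI cpufreq backend, cannot write scaling_governor there, then initializes the PSTAT backend successfully (which suggests this host runs intel_pstate) and switches every managed lcore's governor to performance; the matching powersave restore appears when the test shuts down. What it touches can be inspected directly through sysfs, core 0 shown as an example:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver    # e.g. intel_pstate
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor  # performance while the test runs
    # If a run dies before cleanup, root can restore the governor by hand:
    echo powersave | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor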
00:04:36.963 02:59:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.963 02:59:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:36.963 02:59:08 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.963 02:59:08 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.963 02:59:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 ************************************ 00:04:36.963 START TEST scheduler_create_thread 00:04:36.963 ************************************ 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 2 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 3 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.963 4 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.963 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 5 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 6 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 7 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 8 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 9 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.221 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 10 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.222 02:59:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.156 02:59:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.156 02:59:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:38.156 02:59:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.157 02:59:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.535 02:59:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.535 02:59:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:39.535 02:59:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:39.535 02:59:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.535 02:59:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.474 02:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.474 00:04:40.474 real 0m3.383s 00:04:40.474 user 0m0.026s 00:04:40.474 sys 0m0.002s 00:04:40.474 02:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.474 02:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.474 ************************************ 00:04:40.474 END TEST scheduler_create_thread 00:04:40.474 ************************************ 00:04:40.474 02:59:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.474 02:59:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 863272 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 863272 ']' 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 863272 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 863272 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 863272' 00:04:40.474 killing process with pid 863272 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 863272 00:04:40.474 02:59:11 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 863272 00:04:40.733 [2024-05-15 02:59:11.839080] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
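The scheduler_create_thread subtest above drives the app through rpc.py's plugin mechanism: scheduler_thread_create, scheduler_thread_set_active, and scheduler_thread_delete come from the test's scheduler_plugin, not from the core RPC set (note they are absent from the rpc_get_methods dump earlier). Issued directly, the calls from the trace look roughly like this (rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py; the plugin module must be importable, e.g. via PYTHONPATH, and the comments paraphrase the arguments as they appear in the trace):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id, activity
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # thread id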
00:04:40.992 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:40.992 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:40.992 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:40.992 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:40.992 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:40.992 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:40.992 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:40.992 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:40.992 00:04:40.992 real 0m5.088s 00:04:40.992 user 0m10.545s 00:04:40.992 sys 0m0.341s 00:04:40.992 02:59:12 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.992 02:59:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.992 ************************************ 00:04:40.992 END TEST event_scheduler 00:04:40.992 ************************************ 00:04:40.992 02:59:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:40.992 02:59:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:40.992 02:59:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.992 02:59:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.992 02:59:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.251 ************************************ 00:04:41.251 START TEST app_repeat 00:04:41.251 ************************************ 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=864052 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 864052' 00:04:41.251 Process app_repeat pid: 864052 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.251 spdk_app_start Round 0 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 864052 /var/tmp/spdk-nbd.sock 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 864052 ']' 00:04:41.251 02:59:12 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.251 [2024-05-15 02:59:12.172356] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:04:41.251 [2024-05-15 02:59:12.172393] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864052 ] 00:04:41.251 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.251 [2024-05-15 02:59:12.225969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.251 [2024-05-15 02:59:12.306357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.251 [2024-05-15 02:59:12.306360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.251 02:59:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:41.251 02:59:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.510 Malloc0 00:04:41.510 02:59:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.768 Malloc1 00:04:41.768 02:59:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.768 02:59:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.768 02:59:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.769 02:59:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.028 /dev/nbd0 00:04:42.028 02:59:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.028 02:59:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.028 1+0 records in 00:04:42.028 1+0 records out 00:04:42.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225136 s, 18.2 MB/s 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:42.028 02:59:12 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:42.028 02:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.028 02:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.028 02:59:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.028 /dev/nbd1 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.028 1+0 records in 00:04:42.028 1+0 records out 00:04:42.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000179469 s, 22.8 MB/s 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:42.028 02:59:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.028 02:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:42.288 { 00:04:42.288 "nbd_device": "/dev/nbd0", 00:04:42.288 "bdev_name": "Malloc0" 00:04:42.288 }, 00:04:42.288 { 00:04:42.288 "nbd_device": "/dev/nbd1", 00:04:42.288 "bdev_name": "Malloc1" 00:04:42.288 } 00:04:42.288 ]' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.288 { 00:04:42.288 "nbd_device": "/dev/nbd0", 00:04:42.288 "bdev_name": "Malloc0" 00:04:42.288 }, 00:04:42.288 { 00:04:42.288 "nbd_device": "/dev/nbd1", 00:04:42.288 "bdev_name": "Malloc1" 00:04:42.288 } 00:04:42.288 ]' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.288 /dev/nbd1' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.288 /dev/nbd1' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.288 256+0 records in 00:04:42.288 256+0 records out 00:04:42.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103468 s, 101 MB/s 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.288 256+0 records in 00:04:42.288 256+0 records out 00:04:42.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138421 s, 75.8 MB/s 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.288 256+0 records in 00:04:42.288 256+0 records out 00:04:42.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144795 s, 72.4 MB/s 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.288 02:59:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.547 02:59:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.806 02:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.107 02:59:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.107 02:59:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.398 02:59:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.398 [2024-05-15 02:59:14.478789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.398 [2024-05-15 02:59:14.545436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.398 [2024-05-15 02:59:14.545439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.658 [2024-05-15 02:59:14.586936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.658 [2024-05-15 02:59:14.586977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
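With Round 0 torn down, note that the whole nbd_rpc_data_verify pass traced above reduces to a plain dd/cmp round trip per device. A minimal sketch of that write/verify cycle, assuming Malloc0 and Malloc1 are already exported on /dev/nbd0 and /dev/nbd1 (temp-file path shortened from the workspace path in the log):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device, bypassing page cache
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare the first 1 MiB read back
    done
    rm "$tmp"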
00:04:46.193 02:59:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.193 02:59:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:46.193 spdk_app_start Round 1 00:04:46.193 02:59:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 864052 /var/tmp/spdk-nbd.sock 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 864052 ']' 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.193 02:59:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.451 02:59:17 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.451 02:59:17 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:46.451 02:59:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.710 Malloc0 00:04:46.710 02:59:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.710 Malloc1 00:04:46.710 02:59:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.710 02:59:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.969 /dev/nbd0 00:04:46.969 02:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.969 02:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:46.969 02:59:18 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:46.969 02:59:18 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:46.969 02:59:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:46.969 02:59:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:46.969 02:59:18 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.970 1+0 records in 00:04:46.970 1+0 records out 00:04:46.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181394 s, 22.6 MB/s 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:46.970 02:59:18 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:46.970 02:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.970 02:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.970 02:59:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.230 /dev/nbd1 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.230 1+0 records in 00:04:47.230 1+0 records out 00:04:47.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186433 s, 22.0 MB/s 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:47.230 02:59:18 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:47.230 02:59:18 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.230 02:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:47.490 { 00:04:47.490 "nbd_device": "/dev/nbd0", 00:04:47.490 "bdev_name": "Malloc0" 00:04:47.490 }, 00:04:47.490 { 00:04:47.490 "nbd_device": "/dev/nbd1", 00:04:47.490 "bdev_name": "Malloc1" 00:04:47.490 } 00:04:47.490 ]' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.490 { 00:04:47.490 "nbd_device": "/dev/nbd0", 00:04:47.490 "bdev_name": "Malloc0" 00:04:47.490 }, 00:04:47.490 { 00:04:47.490 "nbd_device": "/dev/nbd1", 00:04:47.490 "bdev_name": "Malloc1" 00:04:47.490 } 00:04:47.490 ]' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.490 /dev/nbd1' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.490 /dev/nbd1' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.490 256+0 records in 00:04:47.490 256+0 records out 00:04:47.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103336 s, 101 MB/s 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.490 256+0 records in 00:04:47.490 256+0 records out 00:04:47.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0139289 s, 75.3 MB/s 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.490 256+0 records in 00:04:47.490 256+0 records out 00:04:47.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015438 s, 67.9 MB/s 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.490 02:59:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.491 02:59:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.750 02:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.009 02:59:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.009 02:59:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.268 02:59:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.526 [2024-05-15 02:59:19.535122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.526 [2024-05-15 02:59:19.601492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.526 [2024-05-15 02:59:19.601510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.526 [2024-05-15 02:59:19.643619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.526 [2024-05-15 02:59:19.643662] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
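Round 2 below repeats the same setup Rounds 0 and 1 traced: two 64 MiB malloc bdevs with 4096-byte blocks, each exported as an NBD device over the app's dedicated RPC socket, followed by a disk-count sanity check. Condensed into plain commands (socket path as in the trace; a sketch of what the harness does, not a drop-in replacement for it):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                    # 64 MiB, 4 KiB blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096                    # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ]                                 # harness fails the round on a mismatch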
00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.816 spdk_app_start Round 2 00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 864052 /var/tmp/spdk-nbd.sock 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 864052 ']' 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.816 02:59:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.816 Malloc0 00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.816 Malloc1 00:04:51.816 02:59:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.816 02:59:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.817 02:59:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.817 02:59:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.817 02:59:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.817 02:59:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.817 02:59:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.075 /dev/nbd0 00:04:52.075 02:59:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.075 02:59:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.075 1+0 records in 00:04:52.075 1+0 records out 00:04:52.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181546 s, 22.6 MB/s 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:52.075 02:59:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:52.076 02:59:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.076 02:59:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.076 02:59:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.334 /dev/nbd1 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.334 1+0 records in 00:04:52.334 1+0 records out 00:04:52.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198938 s, 20.6 MB/s 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:52.334 02:59:23 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:52.334 02:59:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.334 { 00:04:52.334 "nbd_device": "/dev/nbd0", 00:04:52.334 "bdev_name": "Malloc0" 00:04:52.334 }, 00:04:52.334 { 00:04:52.334 "nbd_device": "/dev/nbd1", 00:04:52.334 "bdev_name": "Malloc1" 00:04:52.334 } 00:04:52.334 ]' 00:04:52.334 02:59:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.334 { 00:04:52.334 "nbd_device": "/dev/nbd0", 00:04:52.334 "bdev_name": "Malloc0" 00:04:52.334 }, 00:04:52.335 { 00:04:52.335 "nbd_device": "/dev/nbd1", 00:04:52.335 "bdev_name": "Malloc1" 00:04:52.335 } 00:04:52.335 ]' 00:04:52.335 02:59:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.593 02:59:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.594 /dev/nbd1' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.594 /dev/nbd1' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.594 256+0 records in 00:04:52.594 256+0 records out 00:04:52.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042854 s, 245 MB/s 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.594 256+0 records in 00:04:52.594 256+0 records out 00:04:52.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0140012 s, 74.9 MB/s 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.594 256+0 records in 00:04:52.594 256+0 records out 00:04:52.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152403 s, 68.8 MB/s 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.594 02:59:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.852 02:59:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.853 02:59:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.112 02:59:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.112 02:59:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.371 02:59:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.630 [2024-05-15 02:59:24.615250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.630 [2024-05-15 02:59:24.682360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.630 [2024-05-15 02:59:24.682363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.630 [2024-05-15 02:59:24.723941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.630 [2024-05-15 02:59:24.723983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
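The waitfornbd_exit iterations that follow each nbd_stop_disk above are bounded polls of /proc/partitions: up to 20 probes, breaking as soon as the kernel drops the partition entry. Roughly as follows (the inter-probe sleep is an assumption; the xtrace does not show the helper's full body):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0   # device gone: success
            sleep 0.1                                             # assumed back-off between probes
        done
        return 1                                                  # device never went away
    }

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    waitfornbd_exit nbd1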
00:04:56.919 02:59:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 864052 /var/tmp/spdk-nbd.sock 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 864052 ']' 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:56.919 02:59:27 event.app_repeat -- event/event.sh@39 -- # killprocess 864052 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 864052 ']' 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 864052 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 864052 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 864052' 00:04:56.919 killing process with pid 864052 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@965 -- # kill 864052 00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@970 -- # wait 864052 00:04:56.919 spdk_app_start is called in Round 0. 00:04:56.919 Shutdown signal received, stop current app iteration 00:04:56.919 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:04:56.919 spdk_app_start is called in Round 1. 00:04:56.919 Shutdown signal received, stop current app iteration 00:04:56.919 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:04:56.919 spdk_app_start is called in Round 2. 00:04:56.919 Shutdown signal received, stop current app iteration 00:04:56.919 Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 reinitialization... 00:04:56.919 spdk_app_start is called in Round 3. 
00:04:56.919 Shutdown signal received, stop current app iteration
00:04:56.919 02:59:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:56.919 02:59:27 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:56.919
00:04:56.919 real 0m15.660s
00:04:56.919 user 0m33.815s
00:04:56.919 sys 0m2.361s
00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:56.919 02:59:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:56.919 ************************************
00:04:56.919 END TEST app_repeat
00:04:56.919 ************************************
00:04:56.919 02:59:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:56.919 02:59:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:56.919 02:59:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:56.919 02:59:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:56.919 02:59:27 event -- common/autotest_common.sh@10 -- # set +x
00:04:56.919 ************************************
00:04:56.919 START TEST cpu_locks
00:04:56.919 ************************************
00:04:56.919 02:59:27 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:56.919 * Looking for test storage...
00:04:56.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:56.919 02:59:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:56.919 02:59:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:56.919 02:59:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:56.919 02:59:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:56.919 02:59:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:56.919 02:59:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:56.919 02:59:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:56.919 ************************************
00:04:56.919 START TEST default_locks
00:04:56.919 ************************************
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=867006
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 867006
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 867006 ']'
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
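The cpu_locks suite that starts here revolves around SPDK's per-core lock files under /var/tmp. Two helpers recur throughout the traces below: locks_exist, which uses lslocks to confirm the target process holds its spdk_cpu_lock_* locks, and killprocess/waitforlisten from autotest_common.sh. A minimal sketch of the lock check (helper shape inferred from the xtrace that follows; the grep pattern and lslocks invocation are exactly those traced):

  # True if the given pid holds at least one SPDK per-core CPU lock
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 867006   # pid of the spdk_tgt launched with -m 0x1 above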
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:56.919 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:56.919 [2024-05-15 02:59:28.055200] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:04:56.919 [2024-05-15 02:59:28.055241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867006 ]
00:04:56.919 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.178 [2024-05-15 02:59:28.109371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.178 [2024-05-15 02:59:28.188821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.746 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:57.746 02:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0
00:04:57.746 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 867006
00:04:57.746 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 867006
00:04:57.746 02:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:58.322 lslocks: write error
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 867006
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 867006 ']'
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 867006
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 867006
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 867006'
killing process with pid 867006
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 867006
00:04:58.322 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 867006
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 867006
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 867006
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:58.586 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 867006
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 867006 ']'
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:58.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (867006) - No such process
00:04:58.587 ERROR: process (pid: 867006) is no longer running
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:58.587
00:04:58.587 real 0m1.557s
00:04:58.587 user 0m1.635s
00:04:58.587 sys 0m0.481s
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:58.587 02:59:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:58.587 ************************************
00:04:58.587 END TEST default_locks
00:04:58.587 ************************************
00:04:58.587 02:59:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:58.587 02:59:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:58.587 02:59:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:58.587 02:59:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:58.587 ************************************
00:04:58.587 START TEST default_locks_via_rpc
00:04:58.587 ************************************
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=867270
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 867270
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 867270 ']'
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:58.587 02:59:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:58.587 [2024-05-15 02:59:29.686045] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:04:58.587 [2024-05-15 02:59:29.686085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867270 ]
00:04:58.587 EAL: No free 2048 kB hugepages reported on node 1
00:04:58.587 [2024-05-15 02:59:29.738035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.845 [2024-05-15 02:59:29.817413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.412 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:59.413 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 867270
00:04:59.413 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 867270
00:04:59.413 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 867270
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 867270 ']'
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 867270
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 867270
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 867270'
killing process with pid 867270
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 867270
00:04:59.982 02:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 867270
00:05:00.240
00:05:00.240 real 0m1.644s
00:05:00.240 user 0m1.748s
00:05:00.240 sys 0m0.504s
00:05:00.240 02:59:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:00.240 02:59:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.240 ************************************
00:05:00.240 END TEST default_locks_via_rpc
00:05:00.240 ************************************
00:05:00.240 02:59:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:00.240 02:59:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:00.240 02:59:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:00.240 02:59:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:00.240 ************************************
00:05:00.240 START TEST non_locking_app_on_locked_coremask
00:05:00.240 ************************************
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=867556
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 867556 /var/tmp/spdk.sock
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 867556 ']'
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
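default_locks_via_rpc, which finished above, exercises the same lock files but toggles them at runtime instead of at startup: framework_disable_cpumask_locks must release every spdk_cpu_lock_* holder (no_locks) and framework_enable_cpumask_locks must re-claim them (locks_exist). Roughly, with method names exactly as invoked in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks   # expect zero spdk_cpu_lock_* holders afterwards
  "$rpc" framework_enable_cpumask_locks    # expect the per-core locks to come back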
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:00.241 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:00.241 [2024-05-15 02:59:31.394254] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:00.241 [2024-05-15 02:59:31.394296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867556 ]
00:05:00.499 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.499 [2024-05-15 02:59:31.444483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.500 [2024-05-15 02:59:31.516442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=867753
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 867753 /var/tmp/spdk2.sock
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 867753 ']'
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:00.759 02:59:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:00.759 [2024-05-15 02:59:31.759642] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:00.759 [2024-05-15 02:59:31.759689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867753 ]
00:05:00.759 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.759 [2024-05-15 02:59:31.829889] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
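non_locking_app_on_locked_coremask runs two targets on the same core mask: the first claims core 0's lock as usual, while the second is started with --disable-cpumask-locks plus its own RPC socket, so it boots anyway (hence the "CPU core locks deactivated" notice above). Schematically, with the binary path shortened and flags as in the trace:

  spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # coexists without claiming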
00:05:00.759 [2024-05-15 02:59:31.829910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:01.019 [2024-05-15 02:59:31.980024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.588 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:01.588 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:01.588 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 867556
00:05:01.588 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 867556
00:05:01.588 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:01.847 lslocks: write error
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 867556
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 867556 ']'
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 867556
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:01.847 02:59:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 867556
00:05:02.107 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:02.107 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:02.107 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 867556'
killing process with pid 867556
00:05:02.107 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 867556
00:05:02.107 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 867556
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 867753
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 867753 ']'
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 867753
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 867753
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 867753'
killing process with pid 867753
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 867753
00:05:02.675 02:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 867753
00:05:02.934
00:05:02.934 real 0m2.727s
00:05:02.934 user 0m2.818s
00:05:02.934 sys 0m0.852s
00:05:02.934 02:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:02.934 02:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:02.934 ************************************
00:05:02.934 END TEST non_locking_app_on_locked_coremask
00:05:02.934 ************************************
00:05:03.193 02:59:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:03.193 02:59:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:03.193 02:59:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:03.193 02:59:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:03.193 ************************************
00:05:03.193 START TEST locking_app_on_unlocked_coremask
00:05:03.193 ************************************
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=868080
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 868080 /var/tmp/spdk.sock
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 868080 ']'
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:03.193 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:03.193 [2024-05-15 02:59:34.181038] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:03.193 [2024-05-15 02:59:34.181080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868080 ]
00:05:03.193 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.193 [2024-05-15 02:59:34.233069] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
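locking_app_on_unlocked_coremask inverts the previous case: here the first target is the one that skips the lock (the "CPU core locks deactivated" notice above), which leaves core 0 free for a second, normally started target to claim. A sketch under the same conventions (binary path shortened; flags as traced):

  spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # becomes the holder of core 0's lock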
00:05:03.193 [2024-05-15 02:59:34.233093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:03.193 [2024-05-15 02:59:34.312845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=868270
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 868270 /var/tmp/spdk2.sock
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 868270 ']'
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:04.165 02:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:04.165 [2024-05-15 02:59:35.019763] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:04.165 [2024-05-15 02:59:35.019812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868270 ]
00:05:04.165 EAL: No free 2048 kB hugepages reported on node 1
00:05:04.165 [2024-05-15 02:59:35.095946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.165 [2024-05-15 02:59:35.248361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.734 02:59:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:04.734 02:59:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:04.734 02:59:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 868270
00:05:04.734 02:59:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 868270
00:05:04.734 02:59:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:05.301 lslocks: write error
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 868080
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 868080 ']'
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 868080
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 868080
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 868080'
killing process with pid 868080
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 868080
00:05:05.301 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 868080
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 868270
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 868270 ']'
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 868270
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:05.869 02:59:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 868270
00:05:05.869 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:05.869 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:05.869 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 868270'
killing process with pid 868270
00:05:05.869 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 868270
00:05:05.869 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 868270
00:05:06.437
00:05:06.437 real 0m3.224s
00:05:06.437 user 0m3.459s
00:05:06.437 sys 0m0.877s
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:06.437 ************************************
00:05:06.437 END TEST locking_app_on_unlocked_coremask
00:05:06.437 ************************************
00:05:06.437 02:59:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:06.437 02:59:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:06.437 02:59:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:06.437 02:59:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:06.437 ************************************
00:05:06.437 START TEST locking_app_on_locked_coremask
00:05:06.437 ************************************
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=868756
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 868756 /var/tmp/spdk.sock
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 868756 ']'
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:06.437 02:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:06.437 [2024-05-15 02:59:37.469914] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:06.437 [2024-05-15 02:59:37.469951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868756 ]
00:05:06.437 EAL: No free 2048 kB hugepages reported on node 1
00:05:06.437 [2024-05-15 02:59:37.522802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.696 [2024-05-15 02:59:37.601873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=868821
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 868821 /var/tmp/spdk2.sock
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 868821 /var/tmp/spdk2.sock
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 868821 /var/tmp/spdk2.sock
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 868821 ']'
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:07.264 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:07.264 [2024-05-15 02:59:38.323604] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:07.264 [2024-05-15 02:59:38.323652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868821 ]
00:05:07.264 EAL: No free 2048 kB hugepages reported on node 1
00:05:07.264 [2024-05-15 02:59:38.398151] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 868756 has claimed it.
00:05:07.264 [2024-05-15 02:59:38.398185] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:07.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (868821) - No such process
00:05:07.831 ERROR: process (pid: 868821) is no longer running
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 868756
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 868756
00:05:07.831 02:59:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:08.397 lslocks: write error
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 868756
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 868756 ']'
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 868756
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 868756
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:08.397 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:08.398 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 868756'
killing process with pid 868756
00:05:08.398 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 868756
00:05:08.398 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 868756
00:05:08.656
00:05:08.656 real 0m2.230s
00:05:08.656 user 0m2.478s
00:05:08.656 sys 0m0.567s
00:05:08.656 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:08.656 02:59:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.656 ************************************
00:05:08.656 END TEST locking_app_on_locked_coremask
00:05:08.656 ************************************
00:05:08.656 02:59:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:08.656 02:59:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:08.656 02:59:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:08.656 02:59:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.656 ************************************
00:05:08.656 START TEST locking_overlapped_coremask
00:05:08.656 ************************************
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=869113
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 869113 /var/tmp/spdk.sock
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 869113 ']'
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:08.656 02:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.656 [2024-05-15 02:59:39.760248] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
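The failure path in locking_app_on_locked_coremask above is deliberate: waitforlisten for the second target is wrapped in NOT, so the test passes precisely because spdk_tgt exits with "Cannot create lock on core 0, probably process 868756 has claimed it". Functionally the wrapper reduces to inverting the exit status; the real helper in autotest_common.sh adds the argument validation and es bookkeeping visible in the trace. A simplified sketch:

  # Simplified stand-in for the traced NOT helper: succeed only if the command fails
  NOT() { ! "$@"; }
  NOT waitforlisten 868821 /var/tmp/spdk2.sock   # passes only if the app never comes up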
00:05:08.656 [2024-05-15 02:59:39.760287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869113 ]
00:05:08.656 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.656 [2024-05-15 02:59:39.813691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:08.914 [2024-05-15 02:59:39.895050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:08.914 [2024-05-15 02:59:39.895147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.914 [2024-05-15 02:59:39.895147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=869262
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 869262 /var/tmp/spdk2.sock
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 869262 /var/tmp/spdk2.sock
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:09.480 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 869262 /var/tmp/spdk2.sock
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 869262 ']'
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:09.481 02:59:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:09.481 [2024-05-15 02:59:40.620412] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:09.481 [2024-05-15 02:59:40.620459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869262 ]
00:05:09.481 EAL: No free 2048 kB hugepages reported on node 1
00:05:09.739 [2024-05-15 02:59:40.695441] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 869113 has claimed it.
00:05:09.739 [2024-05-15 02:59:40.695477] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:10.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (869262) - No such process
00:05:10.307 ERROR: process (pid: 869262) is no longer running
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 869113
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 869113 ']'
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 869113
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 869113
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 869113'
killing process with pid 869113
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 869113
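locking_overlapped_coremask works because the two masks intersect in exactly one core, which is why the second target dies with "Cannot create lock on core 2". Worked out:

  # 0x07 = 0b00111 -> cores 0,1,2 (first target, pid 869113)
  # 0x1c = 0b11100 -> cores 2,3,4 (second target)
  printf '0x%x\n' $(( 0x07 & 0x1c ))   # prints 0x4, i.e. core 2 is claimed twice

check_remaining_locks then confirms the surviving target still holds /var/tmp/spdk_cpu_lock_000 through _002, matching its 0x7 mask.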
00:05:10.307 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 869113
00:05:10.567
00:05:10.567 real 0m1.918s
00:05:10.567 user 0m5.391s
00:05:10.567 sys 0m0.388s
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.567 ************************************
00:05:10.567 END TEST locking_overlapped_coremask
00:05:10.567 ************************************
00:05:10.567 02:59:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:10.567 02:59:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:10.567 02:59:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:10.567 02:59:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.567 ************************************
00:05:10.567 START TEST locking_overlapped_coremask_via_rpc
00:05:10.567 ************************************
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=869520
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 869520 /var/tmp/spdk.sock
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 869520 ']'
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:10.567 02:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.826 [2024-05-15 02:59:41.753876] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:10.826 [2024-05-15 02:59:41.753936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869520 ]
00:05:10.826 EAL: No free 2048 kB hugepages reported on node 1
00:05:10.826 [2024-05-15 02:59:41.805802] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:10.826 [2024-05-15 02:59:41.805826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:10.826 [2024-05-15 02:59:41.876380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:10.826 [2024-05-15 02:59:41.876480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.826 [2024-05-15 02:59:41.876481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=869674
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 869674 /var/tmp/spdk2.sock
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 869674 ']'
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:11.394 02:59:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.653 [2024-05-15 02:59:42.599454] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:11.653 [2024-05-15 02:59:42.599512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869674 ]
00:05:11.653 EAL: No free 2048 kB hugepages reported on node 1
00:05:11.653 [2024-05-15 02:59:42.676576] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
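In this via_rpc variant both targets start with --disable-cpumask-locks (the two "CPU core locks deactivated" notices above), so both boot despite the overlapping 0x7/0x1c masks; the conflict is only provoked later, when one side asks for its locks over JSON-RPC. Launch shape, abbreviated as before:

  spdk_tgt -m 0x7 --disable-cpumask-locks &
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &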
00:05:11.653 [2024-05-15 02:59:42.676608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.912 [2024-05-15 02:59:42.826919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.912 [2024-05-15 02:59:42.827036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.912 [2024-05-15 02:59:42.827037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.485 [2024-05-15 02:59:43.422530] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 869520 has claimed it. 
00:05:12.485 request: 00:05:12.485 { 00:05:12.485 "method": "framework_enable_cpumask_locks", 00:05:12.485 "req_id": 1 00:05:12.485 } 00:05:12.485 Got JSON-RPC error response 00:05:12.485 response: 00:05:12.485 { 00:05:12.485 "code": -32603, 00:05:12.485 "message": "Failed to claim CPU core: 2" 00:05:12.485 } 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 869520 /var/tmp/spdk.sock 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 869520 ']' 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 869674 /var/tmp/spdk2.sock 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 869674 ']' 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
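Why core 2: the two coremasks overlap there. 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so the second target's claim collides exactly where the error says. A minimal sketch of the same scenario outside the harness, assuming a standard SPDK build tree for the binary and rpc.py paths (a real run would also wait for each RPC socket to come up, as waitforlisten does above):

for mask in 0x7 0x1c; do                         # decode the two masks
    printf '%-4s -> cores:' "$mask"
    for i in {0..7}; do (( mask >> i & 1 )) && printf ' %d' "$i"; done; echo
done                                             # 0x7 -> 0 1 2, 0x1c -> 2 3 4
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
./scripts/rpc.py framework_enable_cpumask_locks      # first target locks cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'expected: Failed to claim CPU core: 2'  # JSON-RPC -32603, as logged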
00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.485 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.746 00:05:12.746 real 0m2.103s 00:05:12.746 user 0m0.879s 00:05:12.746 sys 0m0.155s 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.746 02:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.746 ************************************ 00:05:12.746 END TEST locking_overlapped_coremask_via_rpc 00:05:12.746 ************************************ 00:05:12.746 02:59:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.746 02:59:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 869520 ]] 00:05:12.746 02:59:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 869520 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 869520 ']' 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 869520 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 869520 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 869520' 00:05:12.746 killing process with pid 869520 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 869520 00:05:12.746 02:59:43 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 869520 00:05:13.314 02:59:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 869674 ]] 00:05:13.314 02:59:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 869674 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 869674 ']' 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 869674 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
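check_remaining_locks, as traced above, is just a glob comparison: after the RPC succeeds on the first target, exactly one lock file per claimed core must exist and nothing else:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1 and 2
[[ ${locks[*]} == "${locks_expected[*]}" ]]          # extra or missing locks fail the test

The killprocess calls around this point first confirm via ps that the pid still names a reactor process before killing it, which is why the second cleanup pass below can log 'No such process' without failing the run.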
00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 869674 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 869674' 00:05:13.314 killing process with pid 869674 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 869674 00:05:13.314 02:59:44 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 869674 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 869520 ]] 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 869520 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 869520 ']' 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 869520 00:05:13.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (869520) - No such process 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 869520 is not found' 00:05:13.574 Process with pid 869520 is not found 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 869674 ]] 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 869674 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 869674 ']' 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 869674 00:05:13.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (869674) - No such process 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 869674 is not found' 00:05:13.574 Process with pid 869674 is not found 00:05:13.574 02:59:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.574 00:05:13.574 real 0m16.740s 00:05:13.574 user 0m29.116s 00:05:13.574 sys 0m4.706s 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.574 02:59:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.574 ************************************ 00:05:13.574 END TEST cpu_locks 00:05:13.574 ************************************ 00:05:13.574 00:05:13.574 real 0m41.685s 00:05:13.574 user 1m20.141s 00:05:13.574 sys 0m7.945s 00:05:13.574 02:59:44 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.574 02:59:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.574 ************************************ 00:05:13.574 END TEST event 00:05:13.574 ************************************ 00:05:13.574 02:59:44 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.574 02:59:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.574 02:59:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.574 02:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:13.834 ************************************ 00:05:13.834 START TEST thread 00:05:13.834 ************************************ 00:05:13.834 02:59:44 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.834 * Looking for test storage... 00:05:13.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:13.834 02:59:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.834 02:59:44 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:13.834 02:59:44 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.834 02:59:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.834 ************************************ 00:05:13.834 START TEST thread_poller_perf 00:05:13.834 ************************************ 00:05:13.834 02:59:44 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.834 [2024-05-15 02:59:44.847507] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:13.834 [2024-05-15 02:59:44.847556] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870092 ] 00:05:13.834 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.834 [2024-05-15 02:59:44.901073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.834 [2024-05-15 02:59:44.974827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.834 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:15.216 ====================================== 00:05:15.216 busy:2309674888 (cyc) 00:05:15.216 total_run_count: 405000 00:05:15.216 tsc_hz: 2300000000 (cyc) 00:05:15.216 ====================================== 00:05:15.216 poller_cost: 5702 (cyc), 2479 (nsec) 00:05:15.216 00:05:15.216 real 0m1.230s 00:05:15.216 user 0m1.159s 00:05:15.216 sys 0m0.067s 00:05:15.216 02:59:46 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.216 02:59:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.216 ************************************ 00:05:15.216 END TEST thread_poller_perf 00:05:15.216 ************************************ 00:05:15.216 02:59:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.216 02:59:46 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:15.216 02:59:46 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.216 02:59:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.216 ************************************ 00:05:15.216 START TEST thread_poller_perf 00:05:15.216 ************************************ 00:05:15.216 02:59:46 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.216 [2024-05-15 02:59:46.150328] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:05:15.216 [2024-05-15 02:59:46.150391] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870345 ] 00:05:15.216 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.216 [2024-05-15 02:59:46.207823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.216 [2024-05-15 02:59:46.279126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.216 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:16.595 ====================================== 00:05:16.595 busy:2301416034 (cyc) 00:05:16.595 total_run_count: 5276000 00:05:16.595 tsc_hz: 2300000000 (cyc) 00:05:16.595 ====================================== 00:05:16.595 poller_cost: 436 (cyc), 189 (nsec) 00:05:16.595 00:05:16.595 real 0m1.237s 00:05:16.595 user 0m1.156s 00:05:16.595 sys 0m0.077s 00:05:16.595 02:59:47 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.595 02:59:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.595 ************************************ 00:05:16.595 END TEST thread_poller_perf 00:05:16.595 ************************************ 00:05:16.595 02:59:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.595 00:05:16.595 real 0m2.663s 00:05:16.595 user 0m2.377s 00:05:16.595 sys 0m0.284s 00:05:16.595 02:59:47 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.595 02:59:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.595 ************************************ 00:05:16.595 END TEST thread 00:05:16.595 ************************************ 00:05:16.595 02:59:47 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.595 02:59:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.595 02:59:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.595 02:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:16.595 ************************************ 00:05:16.595 START TEST accel 00:05:16.595 ************************************ 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.595 * Looking for test storage... 00:05:16.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:16.595 02:59:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:16.595 02:59:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:16.595 02:59:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.595 02:59:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=870633 00:05:16.595 02:59:47 accel -- accel/accel.sh@63 -- # waitforlisten 870633 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@827 -- # '[' -z 870633 ']' 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
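The poller_cost lines in the two runs above are plain ratios of the reported counters; a sketch of the arithmetic (integer math reproduces the printed values, though this is not poller_perf's internal code):

tsc_hz=2300000000                                      # 2.3 GHz, from both reports
busy=2309674888 runs=405000                            # run 1: 1 us period
echo "$(( busy / runs )) cyc"                          # 5702
echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 2479
busy=2301416034 runs=5276000                           # run 2: 0 us period
echo "$(( busy / runs )) cyc"                          # 436
echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 189

The drop from 5702 to 436 cycles is consistent with the second run using a 0 us period, which spares each poller invocation the timed-poller bookkeeping.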
00:05:16.595 02:59:47 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.595 02:59:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.595 02:59:47 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:16.595 02:59:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:16.595 02:59:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.595 02:59:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.595 02:59:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.595 02:59:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.595 02:59:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.595 02:59:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:16.595 02:59:47 accel -- accel/accel.sh@41 -- # jq -r . 00:05:16.595 [2024-05-15 02:59:47.596277] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:16.595 [2024-05-15 02:59:47.596323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870633 ] 00:05:16.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.595 [2024-05-15 02:59:47.649806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.595 [2024-05-15 02:59:47.729604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.532 02:59:48 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.532 02:59:48 accel -- common/autotest_common.sh@860 -- # return 0 00:05:17.532 02:59:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:17.532 02:59:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:17.532 02:59:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:17.532 02:59:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:17.532 02:59:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:17.532 02:59:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:17.532 02:59:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:17.532 02:59:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.532 02:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.532 02:59:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.532 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.532 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.532 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.532 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 
02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.533 02:59:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.533 02:59:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.533 02:59:48 accel -- accel/accel.sh@75 -- # killprocess 870633 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@946 -- # '[' -z 870633 ']' 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@950 -- # kill -0 870633 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@951 -- # uname 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 870633 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 870633' 00:05:17.533 killing process with pid 870633 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@965 -- # kill 870633 00:05:17.533 02:59:48 accel -- common/autotest_common.sh@970 -- # wait 870633 00:05:17.793 02:59:48 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:17.793 02:59:48 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.793 02:59:48 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:17.793 02:59:48 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
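The long IFS== loop above builds the expected opcode-to-module map one entry at a time; reduced to its essence (rpc.py path assumed; the harness goes through its rpc_cmd wrapper instead):

declare -A expected_opcs
exp_opcs=($(./scripts/rpc.py accel_get_opc_assignments |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))  # entries like "copy=software"
for opc_opt in "${exp_opcs[@]}"; do
    IFS='=' read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=software   # no accel hardware configured, so software is expected everywhere
done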
00:05:17.793 02:59:48 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.793 02:59:48 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:17.793 02:59:48 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.793 02:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.793 ************************************ 00:05:17.793 START TEST accel_missing_filename 00:05:17.793 ************************************ 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.793 02:59:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:17.793 02:59:48 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:18.052 [2024-05-15 02:59:48.957415] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:18.052 [2024-05-15 02:59:48.957463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870899 ] 00:05:18.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.052 [2024-05-15 02:59:49.012054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.052 [2024-05-15 02:59:49.083274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.052 [2024-05-15 02:59:49.124276] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.052 [2024-05-15 02:59:49.184443] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:18.311 A filename is required. 
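The compress/decompress workloads are the ones that take an input file via -l, and withholding it is exactly what this case exercises; the relevant invocations side by side (binary and input paths from the trace, written relative to the spdk tree):

perf=./build/examples/accel_perf
$perf -t 1 -w compress                        # rejected: 'A filename is required.'
$perf -t 1 -w compress -l test/accel/bib      # accepted: -l names the uncompressed input
$perf -t 1 -w compress -l test/accel/bib -y   # rejected by the next test: compress cannot verify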
00:05:18.311 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:18.311 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.311 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:18.312 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:18.312 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:18.312 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.312 00:05:18.312 real 0m0.346s 00:05:18.312 user 0m0.271s 00:05:18.312 sys 0m0.112s 00:05:18.312 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.312 02:59:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:18.312 ************************************ 00:05:18.312 END TEST accel_missing_filename 00:05:18.312 ************************************ 00:05:18.312 02:59:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.312 02:59:49 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:18.312 02:59:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.312 02:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.312 ************************************ 00:05:18.312 START TEST accel_compress_verify 00:05:18.312 ************************************ 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.312 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.312 
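The es=234 -> es=106 -> es=1 sequence above is the NOT wrapper normalizing accel_perf's failure status before inverting it; a hedged reconstruction of that logic (the real autotest_common.sh body is more elaborate):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # strip the 128+ offset: 234 -> 106
    (( es != 0 )) && es=1                  # collapse every failure class to 1
    (( !es == 0 ))                         # succeed only when the wrapped command failed
}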
02:59:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:18.312 02:59:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:18.312 [2024-05-15 02:59:49.368288] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:18.312 [2024-05-15 02:59:49.368355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871023 ] 00:05:18.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.312 [2024-05-15 02:59:49.425150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.571 [2024-05-15 02:59:49.501785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.571 [2024-05-15 02:59:49.543523] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.571 [2024-05-15 02:59:49.603189] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:18.571 00:05:18.571 Compression does not support the verify option, aborting. 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.571 00:05:18.571 real 0m0.356s 00:05:18.571 user 0m0.287s 00:05:18.571 sys 0m0.108s 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.571 02:59:49 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:18.571 ************************************ 00:05:18.571 END TEST accel_compress_verify 00:05:18.571 ************************************ 00:05:18.571 02:59:49 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:18.571 02:59:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:18.571 02:59:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.571 02:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.830 ************************************ 00:05:18.830 START TEST accel_wrong_workload 00:05:18.830 ************************************ 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.830 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:18.830 
02:59:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:18.830 02:59:49 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:18.830 Unsupported workload type: foobar 00:05:18.830 [2024-05-15 02:59:49.783502] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:18.830 accel_perf options: 00:05:18.830 [-h help message] 00:05:18.830 [-q queue depth per core] 00:05:18.830 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:18.830 [-T number of threads per core 00:05:18.830 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:18.830 [-t time in seconds] 00:05:18.830 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:18.830 [ dif_verify, , dif_generate, dif_generate_copy 00:05:18.830 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:18.830 [-l for compress/decompress workloads, name of uncompressed input file 00:05:18.830 [-S for crc32c workload, use this seed value (default 0) 00:05:18.831 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:18.831 [-f for fill workload, use this BYTE value (default 255) 00:05:18.831 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:18.831 [-y verify result if this switch is on] 00:05:18.831 [-a tasks to allocate per core (default: same value as -q)] 00:05:18.831 Can be used to spread operations across a wider range of memory. 
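The usage text above documents the flags the surrounding cases exercise; both rejection paths, plus a valid crc32c form, can be driven directly (same binary path as before):

perf=./build/examples/accel_perf
! $perf -t 1 -w foobar             # 'Unsupported workload type: foobar', as just shown
! $perf -t 1 -w xor -y -x -1       # '-x option must be non-negative.', the next case below
$perf -t 1 -w crc32c -S 32 -y      # valid: 1 s of crc32c with seed 32 (-S), verifying results (-y)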
00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.831 00:05:18.831 real 0m0.028s 00:05:18.831 user 0m0.017s 00:05:18.831 sys 0m0.011s 00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.831 02:59:49 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:18.831 ************************************ 00:05:18.831 END TEST accel_wrong_workload 00:05:18.831 ************************************ 00:05:18.831 Error: writing output failed: Broken pipe 00:05:18.831 02:59:49 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.831 ************************************ 00:05:18.831 START TEST accel_negative_buffers 00:05:18.831 ************************************ 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:18.831 02:59:49 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:18.831 -x option must be non-negative. 
00:05:18.831 [2024-05-15 02:59:49.859076] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:18.831 accel_perf options: 00:05:18.831 [-h help message] 00:05:18.831 [-q queue depth per core] 00:05:18.831 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:18.831 [-T number of threads per core 00:05:18.831 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:18.831 [-t time in seconds] 00:05:18.831 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:18.831 [ dif_verify, , dif_generate, dif_generate_copy 00:05:18.831 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:18.831 [-l for compress/decompress workloads, name of uncompressed input file 00:05:18.831 [-S for crc32c workload, use this seed value (default 0) 00:05:18.831 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:18.831 [-f for fill workload, use this BYTE value (default 255) 00:05:18.831 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:18.831 [-y verify result if this switch is on] 00:05:18.831 [-a tasks to allocate per core (default: same value as -q)] 00:05:18.831 Can be used to spread operations across a wider range of memory. 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.831 00:05:18.831 real 0m0.017s 00:05:18.831 user 0m0.008s 00:05:18.831 sys 0m0.009s 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.831 02:59:49 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:18.831 ************************************ 00:05:18.831 END TEST accel_negative_buffers 00:05:18.831 ************************************ 00:05:18.831 Error: writing output failed: Broken pipe 00:05:18.831 02:59:49 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.831 02:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.831 ************************************ 00:05:18.831 START TEST accel_crc32c 00:05:18.831 ************************************ 00:05:18.831 02:59:49 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:18.831 02:59:49 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:18.831 [2024-05-15 02:59:49.959200] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:18.831 [2024-05-15 02:59:49.959264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871202 ] 00:05:18.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.090 [2024-05-15 02:59:50.018923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.090 [2024-05-15 02:59:50.106196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.090 02:59:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:20.468 02:59:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.468 00:05:20.468 real 0m1.374s 00:05:20.468 user 0m1.256s 00:05:20.468 sys 0m0.120s 00:05:20.468 02:59:51 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.468 02:59:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:20.468 ************************************ 00:05:20.468 END TEST accel_crc32c 00:05:20.468 ************************************ 00:05:20.468 02:59:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:20.468 02:59:51 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:20.468 02:59:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.468 02:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.468 ************************************ 00:05:20.468 START TEST accel_crc32c_C2 00:05:20.468 ************************************ 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:20.468 [2024-05-15 02:59:51.396102] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:20.468 [2024-05-15 02:59:51.396149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871458 ] 00:05:20.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.468 [2024-05-15 02:59:51.450676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.468 [2024-05-15 02:59:51.521667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.468 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.469 02:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.894 00:05:21.894 real 0m1.349s 00:05:21.894 user 0m1.233s 00:05:21.894 sys 0m0.119s 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.894 02:59:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:21.894 ************************************ 00:05:21.894 END TEST accel_crc32c_C2 00:05:21.894 ************************************ 00:05:21.894 02:59:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:21.894 02:59:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:21.894 02:59:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.894 02:59:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.894 ************************************ 00:05:21.894 START TEST accel_copy 00:05:21.894 ************************************ 00:05:21.894 02:59:52 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:21.894 02:59:52 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:21.894 02:59:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:21.894 [2024-05-15 02:59:52.808390] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:21.894 [2024-05-15 02:59:52.808454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871703 ] 00:05:21.894 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.894 [2024-05-15 02:59:52.864182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.895 [2024-05-15 02:59:52.936263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.895 02:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
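
The vars parsed above pin down the copy job: the copy opcode (accel_opc=copy), 4096-byte buffers, the software module (accel_module=software), and a one-second run, with the -y switch on the accel_perf command line requesting result verification. Conceptually, the software path for this op reduces to a timed memcpy loop with a verify pass; the C sketch below is an illustrative reduction under those traced values, not SPDK's actual accel engine code.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Illustrative sketch only: a software "copy" workload as a timed
     * memcpy loop. The 4096-byte buffer and one-second budget mirror the
     * values traced above ('4096 bytes', '1 seconds'). */
    int main(void)
    {
        enum { BUF_SZ = 4096 };
        static uint8_t src[BUF_SZ], dst[BUF_SZ];
        unsigned long long ops = 0;

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)rand();            /* arbitrary payload */

        time_t end = time(NULL) + 1;             /* -t 1: run ~one second */
        while (time(NULL) < end) {
            memcpy(dst, src, BUF_SZ);            /* the copy operation */
            if (memcmp(dst, src, BUF_SZ)) {      /* -y: verify the result */
                fprintf(stderr, "miscompare\n");
                return 1;
            }
            ops++;
        }
        printf("%llu copy ops in ~1s\n", ops);
        return 0;
    }
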
00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:23.282 02:59:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.282 00:05:23.282 real 0m1.353s 00:05:23.282 user 0m1.244s 00:05:23.282 sys 0m0.111s 00:05:23.282 02:59:54 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.282 02:59:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 ************************************ 00:05:23.282 END TEST accel_copy 00:05:23.282 ************************************ 00:05:23.282 02:59:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.282 02:59:54 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:23.282 02:59:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.282 02:59:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 ************************************ 00:05:23.282 START TEST accel_fill 00:05:23.282 ************************************ 00:05:23.282 02:59:54 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.282 02:59:54 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:23.282 [2024-05-15 02:59:54.216342] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:23.282 [2024-05-15 02:59:54.216404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871952 ] 00:05:23.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.282 [2024-05-15 02:59:54.270241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.282 [2024-05-15 02:59:54.342063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.282 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.283 02:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:24.659 02:59:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.659 00:05:24.659 real 0m1.344s 00:05:24.659 user 0m1.234s 00:05:24.659 sys 0m0.112s 00:05:24.659 02:59:55 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.659 02:59:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 ************************************ 00:05:24.659 END TEST accel_fill 00:05:24.659 ************************************ 00:05:24.659 02:59:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:24.659 02:59:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:24.659 02:59:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.659 02:59:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.659 ************************************ 00:05:24.659 START TEST accel_copy_crc32c 00:05:24.659 ************************************ 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
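
The job launched above with accel_perf -t 1 -w copy_crc32c -y exercises a fused operation: copy the source into the destination and return the CRC-32C (Castagnoli) checksum of the payload in one call. A minimal software rendering is sketched below; it assumes the standard reflected CRC-32C with reversed polynomial 0x82F63B78 and is a generic illustration, not SPDK's own CRC utility.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli), reflected form with reversed polynomial
     * 0x82F63B78. Table-driven or SSE4.2 variants are faster but compute
     * the same value. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    /* Fused copy+crc32c: copy src into dst and checksum the payload. */
    static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);
        return crc32c(0, src, len);
    }

As a sanity check, crc32c(0, (const uint8_t *)"123456789", 9) yields 0xE3069283, the standard CRC-32C check value.
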
00:05:24.659 [2024-05-15 02:59:55.627972] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:24.659 [2024-05-15 02:59:55.628035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872206 ] 00:05:24.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.659 [2024-05-15 02:59:55.684074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.659 [2024-05-15 02:59:55.755147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.659 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.660 02:59:55 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.660 02:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.036 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.036 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.036 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:26.036 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.037 00:05:26.037 real 0m1.353s 00:05:26.037 user 0m1.241s 00:05:26.037 sys 0m0.113s 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.037 02:59:56 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:26.037 ************************************ 00:05:26.037 END TEST accel_copy_crc32c 00:05:26.037 ************************************ 00:05:26.037 02:59:56 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.037 02:59:56 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:26.037 02:59:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.037 02:59:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.037 ************************************ 00:05:26.037 START TEST accel_copy_crc32c_C2 00:05:26.037 ************************************ 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.037 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:26.037 [2024-05-15 02:59:57.043819] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:26.037 [2024-05-15 02:59:57.043880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872453 ] 00:05:26.037 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.037 [2024-05-15 02:59:57.099583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.037 [2024-05-15 02:59:57.170122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.296 02:59:57 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.296 02:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.233 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.234 00:05:27.234 real 0m1.350s 00:05:27.234 user 0m1.246s 00:05:27.234 sys 0m0.107s 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.234 02:59:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:05:27.234 ************************************ 00:05:27.234 END TEST accel_copy_crc32c_C2 00:05:27.234 ************************************ 00:05:27.493 02:59:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:27.493 02:59:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:27.493 02:59:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.493 02:59:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.493 ************************************ 00:05:27.493 START TEST accel_dualcast 00:05:27.493 ************************************ 00:05:27.493 02:59:58 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:27.493 [2024-05-15 02:59:58.454570] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
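
The dualcast run configured above (accel_perf -t 1 -w dualcast -y, again with 4096-byte buffers) replicates a single source into two destinations in one operation; offload hardware can service this as a single descriptor, while a software module falls back to two plain copies. A hedged sketch of that fallback:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Dualcast: write one source buffer to two destinations. Illustrative
     * software fallback only -- hardware engines handle this as a single
     * descriptor and may impose their own alignment requirements. */
    static void dualcast(uint8_t *dst1, uint8_t *dst2,
                         const uint8_t *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }
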
00:05:27.493 [2024-05-15 02:59:58.454617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872708 ] 00:05:27.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.493 [2024-05-15 02:59:58.508171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.493 [2024-05-15 02:59:58.579939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.493 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 
02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:27.494 02:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.870 02:59:59 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.870 02:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.871 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.871 02:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.871 02:59:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.871 02:59:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:28.871 02:59:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.871 00:05:28.871 real 0m1.347s 00:05:28.871 user 0m1.242s 00:05:28.871 sys 0m0.106s 00:05:28.871 02:59:59 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.871 02:59:59 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:28.871 ************************************ 00:05:28.871 END TEST accel_dualcast 00:05:28.871 ************************************ 00:05:28.871 02:59:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:28.871 02:59:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:28.871 02:59:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.871 02:59:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.871 ************************************ 00:05:28.871 START TEST accel_compare 00:05:28.871 ************************************ 00:05:28.871 02:59:59 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:28.871 02:59:59 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:28.871 [2024-05-15 02:59:59.863906] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
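accel_compare follows the same pattern with only the workload opcode changed; per the trace it compares 4096-byte buffers for one second. A sketch under the same assumptions as the dualcast example:

  # compare workload; -y again requests verification of the result
  ./build/examples/accel_perf -t 1 -w compare -y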
00:05:28.871 [2024-05-15 02:59:59.863966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872960 ] 00:05:28.871 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.871 [2024-05-15 02:59:59.920116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.871 [2024-05-15 02:59:59.991899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.130 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.131 03:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.066 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.067 03:00:01 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:30.067 03:00:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.067 00:05:30.067 real 0m1.356s 00:05:30.067 user 0m1.239s 00:05:30.067 sys 0m0.117s 00:05:30.067 03:00:01 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.067 03:00:01 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:30.067 ************************************ 00:05:30.067 END TEST accel_compare 00:05:30.067 ************************************ 00:05:30.067 03:00:01 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:30.067 03:00:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:30.067 03:00:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.067 03:00:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.342 ************************************ 00:05:30.342 START TEST accel_xor 00:05:30.342 ************************************ 00:05:30.342 03:00:01 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:30.342 [2024-05-15 03:00:01.280537] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
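The first accel_xor test, launched above, runs with the source-buffer count shown in the trace (val=2), which appears to be the default since no -x flag is passed. Sketch, same assumptions:

  # XOR two source buffers into one destination for 1 second
  ./build/examples/accel_perf -t 1 -w xor -y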
00:05:30.342 [2024-05-15 03:00:01.280598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873294 ] 00:05:30.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.342 [2024-05-15 03:00:01.336806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.342 [2024-05-15 03:00:01.410762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.342 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.343 03:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 
03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:31.717 03:00:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.717 00:05:31.717 real 0m1.356s 00:05:31.717 user 0m1.243s 00:05:31.717 sys 0m0.114s 00:05:31.717 03:00:02 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.717 03:00:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:31.717 ************************************ 00:05:31.717 END TEST accel_xor 00:05:31.717 ************************************ 00:05:31.717 03:00:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:31.717 03:00:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:31.718 03:00:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.718 03:00:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.718 ************************************ 00:05:31.718 START TEST accel_xor 00:05:31.718 ************************************ 00:05:31.718 03:00:02 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:31.718 [2024-05-15 03:00:02.692315] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
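The second accel_xor test re-runs the workload with -x 3, raising the XOR source-buffer count to three (val=3 in the trace). Sketch, same assumptions:

  # XOR with three source buffers instead of two
  ./build/examples/accel_perf -t 1 -w xor -y -x 3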
00:05:31.718 [2024-05-15 03:00:02.692362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873584 ] 00:05:31.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.718 [2024-05-15 03:00:02.745663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.718 [2024-05-15 03:00:02.818816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.718 03:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.093 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.094 
03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:33.094 03:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.094 00:05:33.094 real 0m1.353s 00:05:33.094 user 0m1.246s 00:05:33.094 sys 0m0.110s 00:05:33.094 03:00:04 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.094 03:00:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 ************************************ 00:05:33.094 END TEST accel_xor 00:05:33.094 ************************************ 00:05:33.094 03:00:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:33.094 03:00:04 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:33.094 03:00:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.094 03:00:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 ************************************ 00:05:33.094 START TEST accel_dif_verify 00:05:33.094 ************************************ 00:05:33.094 03:00:04 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.094 03:00:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:33.094 [2024-05-15 03:00:04.110739] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
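accel_dif_verify, launched above, exercises DIF metadata checking; note its command line carries no -y. Reading the 4096/512/8-byte values in the trace as transfer size, block size, and per-block metadata size is an inference, not something the log states. Sketch, same assumptions:

  # DIF verify workload; block geometry comes from the harness config / defaults
  ./build/examples/accel_perf -t 1 -w dif_verify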
00:05:33.094 [2024-05-15 03:00:04.110799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873832 ] 00:05:33.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.094 [2024-05-15 03:00:04.168438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.094 [2024-05-15 03:00:04.241043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 
03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 03:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.290 
03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.290 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:34.291 03:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.291 00:05:34.291 real 0m1.357s 00:05:34.291 user 0m1.249s 00:05:34.291 sys 0m0.113s 00:05:34.291 03:00:05 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.291 03:00:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:34.291 ************************************ 00:05:34.291 END TEST accel_dif_verify 00:05:34.291 ************************************ 00:05:34.549 03:00:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:34.549 03:00:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:34.549 03:00:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.549 03:00:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.549 ************************************ 00:05:34.549 START TEST accel_dif_generate 00:05:34.549 ************************************ 00:05:34.549 03:00:05 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:34.549 03:00:05 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:34.549 03:00:05 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:34.549 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.549 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.549 
03:00:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:34.549 03:00:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:34.550 [2024-05-15 03:00:05.533705] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:34.550 [2024-05-15 03:00:05.533754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874253 ] 00:05:34.550 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.550 [2024-05-15 03:00:05.588327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.550 [2024-05-15 03:00:05.663692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.550 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.809 03:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:34.809 [accel.sh xtrace condensed: config loop finishes with val=No and empty val= markers; the interleaved case "$var"/IFS=:/read -r var val lines are omitted]
00:05:35.743 [accel.sh xtrace condensed: empty val= reads after the run omitted]
00:05:35.743 03:00:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:35.743 03:00:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:35.743 03:00:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:35.743 
00:05:35.743 real 0m1.354s
00:05:35.743 user 0m1.246s
00:05:35.743 sys 0m0.113s
00:05:35.743 03:00:06 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:35.743 03:00:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:35.743 ************************************
00:05:35.743 END TEST accel_dif_generate
00:05:35.743 ************************************
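[Note] Every accel test in this section follows the same pattern: run_test wraps accel_test, which launches the accel_perf example binary with a workload (-w) and a run time in seconds (-t), feeds it a generated JSON accel config on /dev/fd/62, and finally checks that the software module executed the expected opcode. A minimal sketch of reproducing one run by hand, assuming the SPDK tree is built at the workspace path shown in this log and hugepages are already configured:

    # Run the software dif_generate workload for 1 second (path taken from this log)
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate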
00:05:35.743 03:00:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:35.743 03:00:06 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:05:35.743 03:00:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:35.743 03:00:06 accel -- common/autotest_common.sh@10 -- # set +x
00:05:36.001 ************************************
00:05:36.002 START TEST accel_dif_generate_copy
00:05:36.002 ************************************
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:36.002 03:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:36.002 [build_accel_config xtrace condensed: accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=',', jq -r .]
00:05:36.002 [2024-05-15 03:00:06.953029] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:36.002 [2024-05-15 03:00:06.953091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874717 ]
00:05:36.002 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.002 [2024-05-15 03:00:07.009974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.002 [2024-05-15 03:00:07.083734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.002 [accel.sh xtrace condensed: config loop sets val=0x1, val=dif_generate_copy (accel_opc=dif_generate_copy), val='4096 bytes' twice, val=software (accel_module=software), val=32 twice, val=1, val='1 seconds', val=No; empty val= markers and the interleaved case "$var"/IFS=:/read -r var val lines are omitted]
00:05:37.379 [accel.sh xtrace condensed: empty val= reads after the run omitted]
00:05:37.380 03:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:37.380 03:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:37.380 03:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:37.380 
00:05:37.380 real 0m1.357s
00:05:37.380 user 0m1.247s
00:05:37.380 sys 0m0.115s
00:05:37.380 03:00:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:37.380 03:00:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:05:37.380 ************************************
00:05:37.380 END TEST accel_dif_generate_copy
00:05:37.380 ************************************
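[Note] The -c /dev/fd/62 argument above is not a file on disk: build_accel_config assembles the accel JSON config in memory and accel.sh hands it to accel_perf over a process-substitution file descriptor. A minimal sketch of the same pattern; gen_cfg and its JSON body are hypothetical stand-ins, not the harness's actual output:

    # Feed generated JSON to accel_perf through a process-substitution fd (no temp file)
    gen_cfg() { echo '{"subsystems": []}'; }  # hypothetical stand-in for build_accel_config
    ./build/examples/accel_perf -c <(gen_cfg) -t 1 -w dif_generate_copy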
00:05:37.380 03:00:08 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:05:37.380 03:00:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:37.380 03:00:08 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:05:37.380 03:00:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:37.380 03:00:08 accel -- common/autotest_common.sh@10 -- # set +x
00:05:37.380 ************************************
00:05:37.380 START TEST accel_comp
00:05:37.380 ************************************
00:05:37.380 03:00:08 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:37.380 03:00:08 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:05:37.380 03:00:08 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:05:37.380 03:00:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:37.380 03:00:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:37.380 03:00:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:05:37.380 [build_accel_config xtrace condensed: accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=',', jq -r .]
00:05:37.380 [2024-05-15 03:00:08.374554] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:37.380 [2024-05-15 03:00:08.374602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874978 ]
00:05:37.380 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.380 [2024-05-15 03:00:08.428494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.380 [2024-05-15 03:00:08.500937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.639 [accel.sh xtrace condensed: config loop sets val=0x1, val=compress (accel_opc=compress), val='4096 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32 twice, val=1, val='1 seconds', val=No; empty val= markers and the interleaved case "$var"/IFS=:/read -r var val lines are omitted]
00:05:38.574 [accel.sh xtrace condensed: empty val= reads after the run omitted]
00:05:38.574 03:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:38.574 03:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:05:38.574 03:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:38.574 
00:05:38.574 real 0m1.351s
00:05:38.574 user 0m1.248s
00:05:38.574 sys 0m0.108s
00:05:38.574 03:00:09 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:38.574 03:00:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:05:38.574 ************************************
00:05:38.574 END TEST accel_comp
00:05:38.574 ************************************
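[Note] accel_comp and the accel_decomp variants that follow all work on the same corpus: -l points accel_perf at an input file (spdk/test/accel/bib) and -y asks it to verify the results. A sketch of the compress/decompress pair, assuming the same built tree as above:

    # Compress the test corpus for 1 second, then decompress it with verification (-y)
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    ./build/examples/accel_perf -t 1 -w compress -l "$BIB"
    ./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y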
00:05:38.574 03:00:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:38.574 03:00:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:05:38.574 03:00:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:38.574 03:00:09 accel -- common/autotest_common.sh@10 -- # set +x
00:05:38.832 ************************************
00:05:38.832 START TEST accel_decomp
00:05:38.832 ************************************
00:05:38.832 03:00:09 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:38.832 03:00:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:05:38.832 03:00:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:05:38.832 03:00:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:38.832 03:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:38.832 03:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:05:38.832 [build_accel_config xtrace condensed: accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=',', jq -r .]
00:05:38.832 [2024-05-15 03:00:09.791816] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:38.832 [2024-05-15 03:00:09.791875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875241 ]
00:05:38.832 EAL: No free 2048 kB hugepages reported on node 1
00:05:38.832 [2024-05-15 03:00:09.849250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.832 [2024-05-15 03:00:09.922475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.832 [accel.sh xtrace condensed: config loop sets val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32 twice, val=1, val='1 seconds', val=Yes; empty val= markers and the interleaved case "$var"/IFS=:/read -r var val lines are omitted]
00:05:40.208 [accel.sh xtrace condensed: empty val= reads after the run omitted]
00:05:40.208 03:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:40.208 03:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:40.208 03:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:40.208 
00:05:40.208 real 0m1.361s
00:05:40.208 user 0m1.254s
00:05:40.208 sys 0m0.111s
00:05:40.208 03:00:11 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:40.208 03:00:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:05:40.208 ************************************
00:05:40.208 END TEST accel_decomp
00:05:40.208 ************************************
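[Note] accel_decmop_full (the transposed spelling comes from accel.sh itself, so the test name is reproduced verbatim) repeats the decompress run with -o 0. Judging by the config loop in the run below, which records val='111250 bytes' where the other runs record val='4096 bytes', a zero block size appears to make the harness submit the whole input file as one buffer; that reading is an inference from this log, not a documented description of the flag. A sketch of the invocation:

    # Decompress the corpus as one full-file operation (-o 0), with verification
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    ./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0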
03:00:11 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:40.208 03:00:11 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:05:40.208 03:00:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:40.208 03:00:11 accel -- common/autotest_common.sh@10 -- # set +x
00:05:40.208 ************************************
00:05:40.208 START TEST accel_decmop_full
00:05:40.208 ************************************
00:05:40.208 03:00:11 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:40.208 03:00:11 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc
00:05:40.208 03:00:11 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module
00:05:40.208 03:00:11 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:40.208 03:00:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:40.208 03:00:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config
00:05:40.208 [build_accel_config xtrace condensed: accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=',', jq -r .]
00:05:40.209 [2024-05-15 03:00:11.219446] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:40.209 [2024-05-15 03:00:11.219521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875508 ]
00:05:40.209 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.209 [2024-05-15 03:00:11.275622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.209 [2024-05-15 03:00:11.348759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.466 [accel.sh xtrace condensed: config loop sets val=0x1, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32 twice, val=1, val='1 seconds', val=Yes; empty val= markers and the interleaved case "$var"/IFS=:/read -r var val lines are omitted]
00:05:41.442 [accel.sh xtrace condensed: empty val= reads after the run omitted]
00:05:41.442 03:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:41.442 03:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:41.442 03:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:41.442 
00:05:41.442 real 0m1.365s
00:05:41.442 user 0m1.253s
00:05:41.442 sys 0m0.117s
00:05:41.442 03:00:12 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:41.442 03:00:12 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x
00:05:41.442 ************************************
00:05:41.442 END TEST accel_decmop_full
00:05:41.442 ************************************
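[Note] The _mcore variants add a core mask: -m 0xf is forwarded to DPDK as -c 0xf, and in the run below the app reports four available cores and starts a reactor on each of cores 0-3. The user time in that run (0m4.596s against 0m1.377s real) shows all four reactors kept busy. A sketch of the invocation:

    # Run the decompress workload across four reactor cores (core mask 0xf)
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    ./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -m 0xf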
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:41.701 [2024-05-15 03:00:12.655838] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:05:41.701 [2024-05-15 03:00:12.655889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875805 ] 00:05:41.701 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.701 [2024-05-15 03:00:12.712984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.701 [2024-05-15 03:00:12.788710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.701 [2024-05-15 03:00:12.788810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.701 [2024-05-15 03:00:12.789048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.701 [2024-05-15 03:00:12.789051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.701 03:00:12 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:41.701 03:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:43.075 03:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:43.075 03:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:43.075 03:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:43.075
00:05:43.075 real    0m1.377s
00:05:43.075 user    0m4.596s
00:05:43.075 sys     0m0.128s
00:05:43.075 03:00:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:43.075 03:00:14 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:43.075 ************************************
00:05:43.075 END TEST accel_decomp_mcore
00:05:43.075 ************************************
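With accel_decomp_mcore passed, the suite moves on to the full-buffer multicore variant. For reference, a minimal sketch of how a run like the one above can be reproduced against the build tree, assuming the same flags this suite passes to accel_perf (SPDK below is just a stand-in for the workspace path; the suite additionally feeds a generated JSON config via -c /dev/fd/62, omitted here):

    # Sketch: 1-second software decompress across cores 0-3 (-m 0xf),
    # verifying output (-y) against the compressed input file (-l).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -m 0xf

The timing above (user 0m4.596s against 0m1.377s real) is consistent with four reactors kept busy for the one-second run.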
00:05:43.075 03:00:14 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:43.075 ************************************
00:05:43.075 START TEST accel_decomp_full_mcore
00:05:43.075 ************************************
00:05:43.076 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:43.076 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:43.076 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:43.076 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:43.076 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:43.076 [2024-05-15 03:00:14.103740] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:43.076 [2024-05-15 03:00:14.103807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876096 ]
00:05:43.076 EAL: No free 2048 kB hugepages reported on node 1
00:05:43.076 [2024-05-15 03:00:14.158975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:43.076 [2024-05-15 03:00:14.234040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:43.076 [2024-05-15 03:00:14.234136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:43.076 [2024-05-15 03:00:14.234156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:43.076 [2024-05-15 03:00:14.234158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:43.336 03:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:44.714 03:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:44.714 03:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:44.714 03:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:44.714
00:05:44.714 real    0m1.384s
00:05:44.714 user    0m4.635s
00:05:44.714 sys     0m0.126s
00:05:44.714 03:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:44.714 03:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:44.714 ************************************
00:05:44.714 END TEST accel_decomp_full_mcore
00:05:44.714 ************************************
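Note what -o 0 changed relative to the previous run: the plain mcore trace recorded val='4096 bytes', while this full_mcore trace records val='111250 bytes', so -o 0 evidently lets accel_perf size each operation to the whole decompressed input instead of fixed 4 KiB blocks (a hedged reading of the trace, not a documented claim). Side by side:

    # Fixed-size transfers ('4096 bytes' in the trace):
    accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
    # Full-buffer transfers (-o 0; '111250 bytes' in the trace):
    accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf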
00:05:44.714 03:00:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:44.714 ************************************
00:05:44.714 START TEST accel_decomp_mthread
00:05:44.714 ************************************
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:44.714 [2024-05-15 03:00:15.557281] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:44.714 [2024-05-15 03:00:15.557347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876388 ]
00:05:44.714 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.714 [2024-05-15 03:00:15.612365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.714 [2024-05-15 03:00:15.684570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:44.714 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:05:44.715 03:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:05:46.091 03:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:46.091 03:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:46.091 03:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:46.091
00:05:46.091 real    0m1.365s
00:05:46.091 user    0m1.263s
00:05:46.091 sys     0m0.116s
00:05:46.091 03:00:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:46.091 03:00:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:46.091 ************************************
00:05:46.091 END TEST accel_decomp_mthread
00:05:46.091 ************************************
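The mthread variant takes the opposite scaling axis: the EAL parameters show -c 0x1 (one core) and the trace records val=2 for the -T option, so concurrency comes from two worker threads on a single reactor rather than from extra cores (assuming -T is accel_perf's thread count, as the test name suggests). Consistent with that, user time stays near wall time (0m1.263s vs 0m1.365s) instead of the roughly 4x seen on the 0xf runs:

    # Single core, two worker threads (hedged reading of -T):
    accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2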
00:05:46.091 03:00:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:46.091 ************************************
00:05:46.091 START TEST accel_decomp_full_mthread
00:05:46.091 ************************************
00:05:46.091 03:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:46.091 03:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:46.091 [2024-05-15 03:00:16.986896] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:46.091 [2024-05-15 03:00:16.986947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876677 ]
00:05:46.091 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.091 [2024-05-15 03:00:17.041275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.091 [2024-05-15 03:00:17.114220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:05:46.091 03:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:05:47.469 03:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:47.469 03:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:47.469 03:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:47.469
00:05:47.469 real    0m1.384s
00:05:47.469 user    0m1.285s
00:05:47.469 sys     0m0.112s
00:05:47.469 03:00:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:47.469 03:00:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:47.469 ************************************
00:05:47.469 END TEST accel_decomp_full_mthread
00:05:47.469 ************************************
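All four decompress variants emit the same wall of case "$var" in / IFS=: / read -r var val records (condensed above): accel.sh lines 19-23 read colon-separated var:val pairs produced by the harness and dispatch on the key, which is why every assignment in the trace is bracketed by an IFS=: and a read. A minimal reconstruction of that loop; the key names are assumed, since only the script line numbers are visible in the xtrace:

    # Hedged reconstruction of the accel.sh@19-23 loop behind the xtrace:
    while IFS=: read -r var val; do      # accel.sh@19
        case "$var" in                   # accel.sh@21
            opc) accel_opc=$val ;;       # accel.sh@23 (key name assumed)
            module) accel_module=$val ;; # accel.sh@22 (key name assumed)
        esac
    done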
00:05:47.469 03:00:18 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:47.469 03:00:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:47.469 03:00:18 accel -- accel/accel.sh@137 -- # build_accel_config
00:05:47.469 ************************************
00:05:47.469 START TEST accel_dif_functional_tests
00:05:47.469 ************************************
00:05:47.470 [2024-05-15 03:00:18.463355] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:47.470 [2024-05-15 03:00:18.463391] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876948 ]
00:05:47.470 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.470 [2024-05-15 03:00:18.514378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:47.470 [2024-05-15 03:00:18.587021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:47.470 [2024-05-15 03:00:18.587120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:47.470 [2024-05-15 03:00:18.587122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.729
00:05:47.729 CUnit - A unit testing framework for C - Version 2.1-3
00:05:47.729 http://cunit.sourceforge.net/
00:05:47.729
00:05:47.729 Suite: accel_dif
00:05:47.729 Test: verify: DIF generated, GUARD check ...passed
00:05:47.729 Test: verify: DIF generated, APPTAG check ...passed
00:05:47.729 Test: verify: DIF generated, REFTAG check ...passed
00:05:47.729 Test: verify: DIF not generated, GUARD check ...[2024-05-15 03:00:18.655695] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:47.729 [2024-05-15 03:00:18.655735] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:47.729 passed
00:05:47.729 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 03:00:18.655764] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:47.729 [2024-05-15 03:00:18.655778] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:47.729 passed
00:05:47.729 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 03:00:18.655798] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:47.729 [2024-05-15 03:00:18.655813] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:47.729 passed
00:05:47.729 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:47.729 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 03:00:18.655852] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:47.729 passed
00:05:47.729 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:47.729 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:47.729 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:47.729 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 03:00:18.655955] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:47.729 passed
00:05:47.729 Test: generate copy: DIF generated, GUARD check ...passed
00:05:47.729 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:47.729 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:47.729 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:47.729 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:47.729 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:47.729 Test: generate copy: iovecs-len validate ...[2024-05-15 03:00:18.656121] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:47.729 passed
00:05:47.729 Test: generate copy: buffer alignment validate ...passed
00:05:47.729
00:05:47.729 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:47.729               suites      1      1    n/a      0        0
00:05:47.729                tests     20     20     20      0        0
00:05:47.729              asserts    204    204    204      0      n/a
00:05:47.729
00:05:47.729 Elapsed time =    0.000 seconds
00:05:47.729
00:05:47.729 real    0m0.431s
00:05:47.729 user    0m0.647s
00:05:47.729 sys     0m0.141s
00:05:47.729 03:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:47.729 03:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:05:47.729 ************************************
00:05:47.729 END TEST accel_dif_functional_tests
00:05:47.729 ************************************
00:05:47.729
00:05:47.729 real    0m31.413s
00:05:47.729 user    0m35.103s
00:05:47.729 sys     0m4.109s
00:05:47.729 03:00:18 accel -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:47.729 03:00:18 accel -- common/autotest_common.sh@10 -- # set +x
00:05:47.729 ************************************
00:05:47.729 END TEST accel
00:05:47.729 ************************************
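Every banner/timing pair in this log comes from the run_test helper in common/autotest_common.sh, which wraps a command between START/END markers and times it. A hedged sketch of the behavior visible here, not the actual helper (which also handles the xtrace toggling and argument checks seen in the @1097/@1103/@1122 lines):

    run_test_sketch() {                  # stand-in for run_test, assumed shape
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                        # yields the real/user/sys triple
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }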
00:05:47.988 03:00:18 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:47.988 ************************************
00:05:47.988 START TEST accel_rpc
00:05:47.988 ************************************
00:05:47.988 * Looking for test storage...
00:05:47.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:47.988 03:00:19 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:47.988 03:00:19 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:05:47.988 03:00:19 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=877020
00:05:47.988 03:00:19 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 877020
00:05:47.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:47.988 [2024-05-15 03:00:19.084539] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:05:47.988 [2024-05-15 03:00:19.084583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877020 ]
00:05:47.988 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.988 [2024-05-15 03:00:19.138341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.247 [2024-05-15 03:00:19.218079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.815 ************************************
00:05:48.815 START TEST accel_assign_opcode
00:05:48.815 ************************************
00:05:48.815 03:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:05:48.815 [2024-05-15 03:00:19.940215] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:05:48.815 03:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:05:48.815 [2024-05-15 03:00:19.948229] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:05:48.815 03:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:05:49.073 03:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:05:49.073 03:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:05:49.073 03:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:05:49.073 software
00:05:49.073
00:05:49.073 real    0m0.240s
00:05:49.073 user    0m0.043s
00:05:49.073 sys     0m0.012s
00:05:49.073 03:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:49.073 03:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:49.073 ************************************
00:05:49.073 END TEST accel_assign_opcode
00:05:49.073 ************************************
00:05:49.073 03:00:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 877020
00:05:49.331 03:00:20 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 877020'
00:05:49.331 killing process with pid 877020
00:05:49.331 03:00:20 accel_rpc -- common/autotest_common.sh@965 -- # kill 877020
00:05:49.590 03:00:20 accel_rpc -- common/autotest_common.sh@970 -- # wait 877020
00:05:49.590
00:05:49.590 real    0m1.628s
00:05:49.590 user    0m1.709s
00:05:49.590 sys     0m0.416s
00:05:49.590 03:00:20 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:49.590 03:00:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.590 ************************************
00:05:49.590 END TEST accel_rpc
00:05:49.590 ************************************
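The accel_assign_opcode test boils down to four RPCs against the --wait-for-rpc target: assign the copy opcode to a nonexistent module, reassign it to software, start the framework, and read the assignment back. Restated as direct scripts/rpc.py calls (rpc_cmd in the trace is the suite's wrapper around this script):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m incorrect  # accepted pre-init
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software   # last assignment wins
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software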
app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.590 03:00:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.590 03:00:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.590 03:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.590 ************************************ 00:05:49.590 START TEST app_cmdline 00:05:49.590 ************************************ 00:05:49.590 03:00:20 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.590 * Looking for test storage... 00:05:49.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.847 03:00:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:49.847 03:00:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=877335 00:05:49.847 03:00:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 877335 00:05:49.847 03:00:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 877335 ']' 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.847 03:00:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.847 [2024-05-15 03:00:20.802017] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:05:49.847 [2024-05-15 03:00:20.802064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877335 ]
00:05:49.847 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.847 [2024-05-15 03:00:20.856637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.847 [2024-05-15 03:00:20.935592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@860 -- # return 0
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:50.784 {
00:05:50.784 "version": "SPDK v24.05-pre git sha1 2b14ffc34",
00:05:50.784 "fields": {
00:05:50.784 "major": 24,
00:05:50.784 "minor": 5,
00:05:50.784 "patch": 0,
00:05:50.784 "suffix": "-pre",
00:05:50.784 "commit": "2b14ffc34"
00:05:50.784 }
00:05:50.784 }
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:50.784 03:00:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:50.784 03:00:21 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:51.043 request:
00:05:51.043 {
00:05:51.043 "method": "env_dpdk_get_mem_stats",
00:05:51.043 "req_id": 1
00:05:51.043 }
00:05:51.043 Got JSON-RPC error response
00:05:51.043 response:
00:05:51.043 {
00:05:51.043 "code": -32601,
00:05:51.043 "message": "Method not found"
00:05:51.043 }
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:51.043 03:00:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 877335
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 877335 ']'
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 877335
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@951 -- # uname
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:51.043 03:00:21 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 877335
00:05:51.043 03:00:22 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:51.043 03:00:22 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:51.043 03:00:22 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 877335'
00:05:51.043 killing process with pid 877335
00:05:51.043 03:00:22 app_cmdline -- common/autotest_common.sh@965 -- # kill 877335
00:05:51.043 03:00:22 app_cmdline -- common/autotest_common.sh@970 -- # wait 877335
00:05:51.302
00:05:51.302 real 0m1.703s
00:05:51.303 user 0m2.047s
00:05:51.303 sys 0m0.411s
00:05:51.303 03:00:22 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:51.303 03:00:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:51.303 ************************************
00:05:51.303 END TEST app_cmdline
00:05:51.303 ************************************
00:05:51.303 03:00:22 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:51.303 03:00:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:51.303 03:00:22 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:51.303 03:00:22 -- common/autotest_common.sh@10 -- # set +x
00:05:51.303 ************************************
00:05:51.303 START TEST version
00:05:51.303 ************************************
00:05:51.303 03:00:22 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:51.562 * Looking for test storage...
00:05:51.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.562 03:00:22 version -- app/version.sh@17 -- # get_header_version major 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:51.562 03:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.562 03:00:22 version -- app/version.sh@17 -- # major=24 00:05:51.562 03:00:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.562 03:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.562 03:00:22 version -- app/version.sh@18 -- # minor=5 00:05:51.562 03:00:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.562 03:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.562 03:00:22 version -- app/version.sh@19 -- # patch=0 00:05:51.562 03:00:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.562 03:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:51.562 03:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.562 03:00:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.562 03:00:22 version -- app/version.sh@22 -- # version=24.5 00:05:51.562 03:00:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.562 03:00:22 version -- app/version.sh@28 -- # version=24.5rc0 00:05:51.562 03:00:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:51.562 03:00:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.562 03:00:22 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:51.562 03:00:22 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:51.562 00:05:51.562 real 0m0.153s 00:05:51.562 user 0m0.082s 00:05:51.562 sys 0m0.106s 00:05:51.562 03:00:22 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.562 03:00:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.562 ************************************ 00:05:51.562 END TEST version 00:05:51.562 ************************************ 00:05:51.562 03:00:22 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@194 -- # uname -s 00:05:51.562 03:00:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:51.562 03:00:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.562 03:00:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.562 03:00:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
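
The version test just traced recovers each field from include/spdk/version.h with grep/cut/tr and cross-checks the result against the Python package. A condensed sketch of the same extraction, assuming the tab-separated #define layout that the logged cut -f2 calls rely on; the rc0 mapping for a -pre suffix mirrors the 24.5rc0 comparison above:

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch         # patch is 0 here, so it is skipped
    [[ $suffix == -pre ]] && version=${version}rc0       # 24.5 -> 24.5rc0
    [[ $version == $(python3 -c 'import spdk; print(spdk.__version__)') ]] && echo OK
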
00:05:51.562 03:00:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:51.562 03:00:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.562 03:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:51.562 03:00:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:51.562 03:00:22 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:05:51.562 03:00:22 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.562 03:00:22 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:51.562 03:00:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.562 03:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:51.562 ************************************ 00:05:51.562 START TEST nvmf_tcp 00:05:51.562 ************************************ 00:05:51.562 03:00:22 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.820 * Looking for test storage... 00:05:51.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.820 03:00:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.820 03:00:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.820 03:00:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.820 03:00:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.820 03:00:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.820 03:00:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.820 03:00:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:51.820 03:00:22 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:51.820 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:51.821 03:00:22 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:51.821 03:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.821 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:51.821 03:00:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:51.821 03:00:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:51.821 03:00:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.821 
03:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.821 ************************************ 00:05:51.821 START TEST nvmf_example 00:05:51.821 ************************************ 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:51.821 * Looking for test storage... 00:05:51.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:51.821 03:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:58.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:58.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:58.385 Found net devices under 
0000:86:00.0: cvl_0_0
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:05:58.385 Found net devices under 0000:86:00.1: cvl_0_1
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:05:58.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:58.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms
00:05:58.385
00:05:58.385 --- 10.0.0.2 ping statistics ---
00:05:58.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:58.385 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:58.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:58.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:05:58.385
00:05:58.385 --- 10.0.0.1 ping statistics ---
00:05:58.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:58.385 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=880941
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 880941
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 880941 ']'
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
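
The nvmf_tcp_init trace above splits the two e810 ports across network namespaces: cvl_0_0 (10.0.0.2, the target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, and the two pings prove reachability in both directions before the example target is launched inside the namespace. Condensed from the commands traced above (same device names and 10.0.0.0/24 plan; the address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk                # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                          # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
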
00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.385 03:00:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.385 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:58.643 03:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:58.643 EAL: No free 2048 kB hugepages reported on node 1 
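
The rpc_cmd sequence just traced provisions the example target end to end: a TCP transport (-u 8192 sets the I/O unit size), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420; spdk_nvme_perf then drives it for 10 seconds of 4 KiB random reads/writes at queue depth 64. The same bring-up replayed as explicit rpc.py calls (a sketch; rpc_cmd wraps this client, and the default /var/tmp/spdk.sock socket is assumed):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                 # creates Malloc0, the backing bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
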
00:06:08.615 Initializing NVMe Controllers
00:06:08.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:08.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:08.615 Initialization complete. Launching workers.
00:06:08.615 ========================================================
00:06:08.615 Latency(us)
00:06:08.615 Device Information : IOPS MiB/s Average min max
00:06:08.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18165.00 70.96 3523.03 711.25 16143.99
00:06:08.615 ========================================================
00:06:08.615 Total : 18165.00 70.96 3523.03 711.25 16143.99
00:06:08.615
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:08.615 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:08.615 rmmod nvme_tcp
00:06:08.873 rmmod nvme_fabrics
00:06:08.873 rmmod nvme_keyring
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 880941 ']'
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 880941
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 880941 ']'
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 880941
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 880941
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']'
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 880941'
00:06:08.873 killing process with pid 880941
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 880941
00:06:08.873 03:00:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 880941
00:06:09.197 nvmf threads initialize successfully
00:06:09.197 bdev subsystem init successfully
00:06:09.197 created a nvmf target service
00:06:09.197 create targets's poll groups done
00:06:09.197 all subsystems of target started
00:06:09.197 nvmf target is running
00:06:09.197 all subsystems of target stopped
00:06:09.197 destroy targets's poll groups done
00:06:09.197 destroyed the nvmf target service
00:06:09.197 bdev subsystem finish successfully
00:06:09.197 nvmf threads destroy successfully
00:06:09.197 03:00:40
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.197 03:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.100 00:06:11.100 real 0m19.319s 00:06:11.100 user 0m45.697s 00:06:11.100 sys 0m5.555s 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.100 03:00:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.100 ************************************ 00:06:11.100 END TEST nvmf_example 00:06:11.100 ************************************ 00:06:11.100 03:00:42 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.100 03:00:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:11.100 03:00:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.100 03:00:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.100 ************************************ 00:06:11.100 START TEST nvmf_filesystem 00:06:11.100 ************************************ 00:06:11.100 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.361 * Looking for test storage... 
00:06:11.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:11.361 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:11.362 03:00:42 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:11.362 03:00:42 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.362 
03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:11.362 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:11.362 #define SPDK_CONFIG_H 00:06:11.362 #define SPDK_CONFIG_APPS 1 00:06:11.362 #define SPDK_CONFIG_ARCH native 00:06:11.362 #undef SPDK_CONFIG_ASAN 00:06:11.362 #undef SPDK_CONFIG_AVAHI 00:06:11.362 #undef SPDK_CONFIG_CET 00:06:11.362 #define SPDK_CONFIG_COVERAGE 1 00:06:11.362 #define SPDK_CONFIG_CROSS_PREFIX 00:06:11.362 #undef SPDK_CONFIG_CRYPTO 00:06:11.362 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:11.362 #undef SPDK_CONFIG_CUSTOMOCF 00:06:11.362 #undef SPDK_CONFIG_DAOS 00:06:11.362 #define SPDK_CONFIG_DAOS_DIR 00:06:11.362 #define SPDK_CONFIG_DEBUG 1 00:06:11.362 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:11.362 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.362 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:11.362 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:11.362 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:11.362 #undef SPDK_CONFIG_DPDK_UADK 00:06:11.362 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.362 #define SPDK_CONFIG_EXAMPLES 1 00:06:11.362 #undef SPDK_CONFIG_FC 00:06:11.362 #define SPDK_CONFIG_FC_PATH 00:06:11.362 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:11.362 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:11.362 #undef SPDK_CONFIG_FUSE 00:06:11.362 #undef SPDK_CONFIG_FUZZER 00:06:11.362 #define SPDK_CONFIG_FUZZER_LIB 00:06:11.362 #undef SPDK_CONFIG_GOLANG 00:06:11.362 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:11.362 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:11.362 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:11.362 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:11.362 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:11.362 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:11.362 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:11.362 #define SPDK_CONFIG_IDXD 1 00:06:11.362 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:11.362 #undef SPDK_CONFIG_IPSEC_MB 00:06:11.362 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:11.362 #define SPDK_CONFIG_ISAL 1 00:06:11.362 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:11.362 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:11.362 #define SPDK_CONFIG_LIBDIR 00:06:11.362 #undef SPDK_CONFIG_LTO 00:06:11.362 #define SPDK_CONFIG_MAX_LCORES 00:06:11.362 #define SPDK_CONFIG_NVME_CUSE 1 00:06:11.362 #undef SPDK_CONFIG_OCF 00:06:11.362 #define SPDK_CONFIG_OCF_PATH 00:06:11.362 #define SPDK_CONFIG_OPENSSL_PATH 00:06:11.362 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:11.362 #define SPDK_CONFIG_PGO_DIR 00:06:11.362 #undef 
SPDK_CONFIG_PGO_USE 00:06:11.363 #define SPDK_CONFIG_PREFIX /usr/local 00:06:11.363 #undef SPDK_CONFIG_RAID5F 00:06:11.363 #undef SPDK_CONFIG_RBD 00:06:11.363 #define SPDK_CONFIG_RDMA 1 00:06:11.363 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:11.363 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:11.363 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:11.363 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:11.363 #define SPDK_CONFIG_SHARED 1 00:06:11.363 #undef SPDK_CONFIG_SMA 00:06:11.363 #define SPDK_CONFIG_TESTS 1 00:06:11.363 #undef SPDK_CONFIG_TSAN 00:06:11.363 #define SPDK_CONFIG_UBLK 1 00:06:11.363 #define SPDK_CONFIG_UBSAN 1 00:06:11.363 #undef SPDK_CONFIG_UNIT_TESTS 00:06:11.363 #undef SPDK_CONFIG_URING 00:06:11.363 #define SPDK_CONFIG_URING_PATH 00:06:11.363 #undef SPDK_CONFIG_URING_ZNS 00:06:11.363 #undef SPDK_CONFIG_USDT 00:06:11.363 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:11.363 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:11.363 #define SPDK_CONFIG_VFIO_USER 1 00:06:11.363 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:11.363 #define SPDK_CONFIG_VHOST 1 00:06:11.363 #define SPDK_CONFIG_VIRTIO 1 00:06:11.363 #undef SPDK_CONFIG_VTUNE 00:06:11.363 #define SPDK_CONFIG_VTUNE_DIR 00:06:11.363 #define SPDK_CONFIG_WERROR 1 00:06:11.363 #define SPDK_CONFIG_WPDK_DIR 00:06:11.363 #undef SPDK_CONFIG_XNVME 00:06:11.363 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:11.363 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:11.364 03:00:42 
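[Annotation] The interleaved ': <value>' / 'export SPDK_TEST_*' pairs above and below are autotest_common.sh materializing one flag per test feature. The trace is consistent with the ': ${VAR:=default}' idiom, which only assigns when autorun-spdk.conf did not already set the variable; a minimal sketch of the pattern (flag name taken from this trace):

    # ':' is a no-op command, so expanding its argument has one side effect:
    # the := assignment fires only when the variable is unset or empty.
    : ${SPDK_TEST_NVMF:=0}
    export SPDK_TEST_NVMF   # traced here as ': 1' because this job sets it to 1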
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.364 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
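[Annotation] The records above assemble a LeakSanitizer suppression list on the fly and export it through LSAN_OPTIONS, so the known libfuse3 leak does not fail sanitizer-enabled runs. Condensed, the traced steps amount to roughly:

    # rebuild the suppression file, then point LSAN at it via the environment
    rm -rf /var/tmp/asan_suppression_file
    echo "leak:libfuse3.so" >> /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file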
00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 883356 ]] 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 883356 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.bnzdsm 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.bnzdsm/tests/target /tmp/spdk.bnzdsm 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=973762560 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4310667264 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=189127888896 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=195974311936 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=6846423040 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:11.365 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97983778816 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987153920 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=39185489920 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=39194865664 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9375744 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97986531328 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987158016 00:06:11.366 03:00:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=626688 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=19597426688 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=19597430784 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:11.366 * Looking for test storage... 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=189127888896 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=9061015552 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:11.366 03:00:42 
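[Annotation] set_test_storage, traced above, scans the df -T table into associative arrays and then walks storage_candidates for the first mount with at least requested_size (2214592512 bytes in this run) available, printing the '* Looking for test storage...' / '* Found test storage at ...' markers seen in the log. A condensed sketch of the scan, reusing the field names from the trace:

    # one mount per line; skip df's header row
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)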
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.366 
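[Annotation] The PS4 assignment at common/autotest_common.sh@1682 above is what gives every traced command in this log its shape: a timestamp (\t), the test domain, the sourcing file reduced to its last two path components, and the line number. In isolation:

    # each xtrace line renders as:  HH:MM:SS <domain> -- <file>@<line> -- $ <cmd>
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x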
03:00:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.366 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.367 03:00:42 
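[Annotation] A few records up, nvmf/common.sh fixed the NVMe-oF TCP listener ports (4420, plus 4421/4422 for secondary listeners) and derived the host identity with nvme gen-hostnqn, whose UUID suffix is reused as NVME_HOSTID. The command emits a UUID-based NQN in the 2014-08 nvmexpress namespace; the value captured in this run:

    nvme gen-hostnqn
    # nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562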
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:11.367 03:00:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:16.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:16.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.635 03:00:46 
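[Annotation] The device probe here is driven entirely by PCI IDs and sysfs: the e810/x722/mlx arrays are keyed by vendor:device pairs (0x8086:0x159b, found twice on this rig, is an Intel E810-family port bound to the ice driver, per the 'Found' lines), and each function's kernel interface names are read from its PCI sysfs node, exactly as in this minimal sketch using an address from the trace:

    # list the netdevs behind one PCI function
    pci=0000:86:00.0
    for pci_net_dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "${pci_net_dev##*/}"   # -> cvl_0_0 on this machine
    done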
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:16.635 Found net devices under 0000:86:00.0: cvl_0_0 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:16.635 Found net devices under 0000:86:00.1: cvl_0_1 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:16.635 03:00:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:16.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:06:16.635 00:06:16.635 --- 10.0.0.2 ping statistics --- 00:06:16.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.635 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:06:16.635 00:06:16.635 --- 10.0.0.1 ping statistics --- 00:06:16.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.635 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:16.635 03:00:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.636 ************************************ 00:06:16.636 START TEST nvmf_filesystem_no_in_capsule 00:06:16.636 ************************************ 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=886257 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 886257 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 886257 ']' 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.636 03:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:16.636 [2024-05-15 03:00:47.177381] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:16.636 [2024-05-15 03:00:47.177420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.636 [2024-05-15 03:00:47.232703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.636 [2024-05-15 03:00:47.307730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.636 [2024-05-15 03:00:47.307774] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.636 [2024-05-15 03:00:47.307781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.636 [2024-05-15 03:00:47.307787] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.636 [2024-05-15 03:00:47.307792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
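Everything traced up to this point is nvmf_tcp_init from nvmf/common.sh: the two e810 ports are split so that cvl_0_0 (10.0.0.2, the target) lives in a private network namespace while cvl_0_1 (10.0.0.1, the initiator) stays in the host namespace, letting one machine talk to itself over real NICs. Condensed into plain shell, with every name, address, and flag taken from the trace above:

# Split the two ports between host and namespace (sketch of nvmf_tcp_init).
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host
modprobe nvme-tcp                                                    # kernel initiator
# The target itself then runs inside the namespace, as logged above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &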
00:06:16.636 [2024-05-15 03:00:47.307855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.636 [2024-05-15 03:00:47.307950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.636 [2024-05-15 03:00:47.307973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.636 [2024-05-15 03:00:47.307974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.895 [2024-05-15 03:00:48.044439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.895 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 Malloc1 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.154 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.155 [2024-05-15 03:00:48.190431] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:17.155 [2024-05-15 03:00:48.190680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:17.155 { 00:06:17.155 "name": "Malloc1", 00:06:17.155 "aliases": [ 00:06:17.155 "4aa615a4-70a7-4b3d-8567-2bf7ebe74847" 00:06:17.155 ], 00:06:17.155 "product_name": "Malloc disk", 00:06:17.155 "block_size": 512, 00:06:17.155 "num_blocks": 1048576, 00:06:17.155 "uuid": "4aa615a4-70a7-4b3d-8567-2bf7ebe74847", 00:06:17.155 "assigned_rate_limits": { 00:06:17.155 "rw_ios_per_sec": 0, 00:06:17.155 "rw_mbytes_per_sec": 0, 00:06:17.155 "r_mbytes_per_sec": 0, 00:06:17.155 "w_mbytes_per_sec": 0 00:06:17.155 }, 00:06:17.155 "claimed": true, 00:06:17.155 "claim_type": "exclusive_write", 00:06:17.155 "zoned": false, 00:06:17.155 "supported_io_types": { 00:06:17.155 "read": true, 00:06:17.155 "write": true, 00:06:17.155 "unmap": true, 00:06:17.155 "write_zeroes": true, 00:06:17.155 "flush": true, 00:06:17.155 "reset": true, 00:06:17.155 "compare": false, 00:06:17.155 "compare_and_write": false, 00:06:17.155 "abort": true, 00:06:17.155 "nvme_admin": false, 00:06:17.155 "nvme_io": false 00:06:17.155 }, 00:06:17.155 "memory_domains": [ 00:06:17.155 { 00:06:17.155 "dma_device_id": "system", 00:06:17.155 "dma_device_type": 1 
00:06:17.155 }, 00:06:17.155 { 00:06:17.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.155 "dma_device_type": 2 00:06:17.155 } 00:06:17.155 ], 00:06:17.155 "driver_specific": {} 00:06:17.155 } 00:06:17.155 ]' 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:17.155 03:00:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:18.529 03:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:18.529 03:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:18.529 03:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:18.529 03:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:18.529 03:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:20.432 03:00:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:20.432 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:20.999 03:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:21.564 03:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.499 ************************************ 00:06:22.499 START TEST filesystem_ext4 00:06:22.499 ************************************ 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:22.499 03:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:22.499 03:00:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:22.499 mke2fs 1.46.5 (30-Dec-2021) 00:06:22.757 Discarding device blocks: 0/522240 done 00:06:22.757 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:22.757 Filesystem UUID: 170e854f-e04c-4f14-8885-1a5ed6445888 00:06:22.757 Superblock backups stored on blocks: 00:06:22.757 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:22.757 00:06:22.757 Allocating group tables: 0/64 done 00:06:22.757 Writing inode tables: 0/64 done 00:06:23.016 Creating journal (8192 blocks): done 00:06:24.099 Writing superblocks and filesystem accounting information: 0/64 done 00:06:24.099 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 886257 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.099 00:06:24.099 real 0m1.587s 00:06:24.099 user 0m0.027s 00:06:24.099 sys 0m0.063s 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.099 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:24.099 ************************************ 00:06:24.099 END TEST filesystem_ext4 00:06:24.099 ************************************ 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.365 03:00:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.365 ************************************ 00:06:24.365 START TEST filesystem_btrfs 00:06:24.365 ************************************ 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:24.365 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:24.624 btrfs-progs v6.6.2 00:06:24.624 See https://btrfs.readthedocs.io for more information. 00:06:24.624 00:06:24.624 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:24.624 NOTE: several default settings have changed in version 5.15, please make sure 00:06:24.624 this does not affect your deployments: 00:06:24.624 - DUP for metadata (-m dup) 00:06:24.624 - enabled no-holes (-O no-holes) 00:06:24.624 - enabled free-space-tree (-R free-space-tree) 00:06:24.624 00:06:24.624 Label: (null) 00:06:24.624 UUID: 90acbbd1-3a57-4a78-bc54-a480374e5cdc 00:06:24.624 Node size: 16384 00:06:24.624 Sector size: 4096 00:06:24.624 Filesystem size: 510.00MiB 00:06:24.624 Block group profiles: 00:06:24.624 Data: single 8.00MiB 00:06:24.624 Metadata: DUP 32.00MiB 00:06:24.624 System: DUP 8.00MiB 00:06:24.624 SSD detected: yes 00:06:24.624 Zoned device: no 00:06:24.624 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:24.624 Runtime features: free-space-tree 00:06:24.624 Checksum: crc32c 00:06:24.624 Number of devices: 1 00:06:24.624 Devices: 00:06:24.624 ID SIZE PATH 00:06:24.624 1 510.00MiB /dev/nvme0n1p1 00:06:24.624 00:06:24.624 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:24.624 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.882 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 886257 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.883 00:06:24.883 real 0m0.593s 00:06:24.883 user 0m0.027s 00:06:24.883 sys 0m0.122s 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:24.883 ************************************ 00:06:24.883 END TEST filesystem_btrfs 00:06:24.883 ************************************ 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:24.883 03:00:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.883 ************************************ 00:06:24.883 START TEST filesystem_xfs 00:06:24.883 ************************************ 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:24.883 03:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:25.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:25.141 = sectsz=512 attr=2, projid32bit=1 00:06:25.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:25.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:25.141 data = bsize=4096 blocks=130560, imaxpct=25 00:06:25.141 = sunit=0 swidth=0 blks 00:06:25.141 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:25.141 log =internal log bsize=4096 blocks=16384, version=2 00:06:25.141 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:25.141 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:25.705 Discarding blocks...Done. 
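The xfs pass that follows runs the same exercise ext4 and btrfs just went through, all from nvmf_filesystem_create in target/filesystem.sh (lines 18-43 in the trace): make the filesystem, write a file through the mounted namespace, sync, remove it, unmount, and prove the target survived the I/O. Condensed, with the values from this run:

fstype=xfs; nvme_name=nvme0n1; force=-f; nvmfpid=886257   # per the trace; ext4 uses -F
mkfs."$fstype" "$force" "/dev/${nvme_name}p1"
mount "/dev/${nvme_name}p1" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                              # the target process must still be up
lsblk -l -o NAME | grep -q -w "$nvme_name"      # whole device still visible
lsblk -l -o NAME | grep -q -w "${nvme_name}p1"  # partition still visible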
00:06:25.705 03:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:25.705 03:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 886257 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.235 00:06:28.235 real 0m3.301s 00:06:28.235 user 0m0.027s 00:06:28.235 sys 0m0.067s 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:28.235 ************************************ 00:06:28.235 END TEST filesystem_xfs 00:06:28.235 ************************************ 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:28.235 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:28.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:28.494 
03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 886257 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 886257 ']' 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 886257 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 886257 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 886257' 00:06:28.494 killing process with pid 886257 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 886257 00:06:28.494 [2024-05-15 03:00:59.515474] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:28.494 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 886257 00:06:28.754 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:28.754 00:06:28.754 real 0m12.759s 00:06:28.754 user 0m50.131s 00:06:28.754 sys 0m1.215s 00:06:28.754 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.754 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.754 ************************************ 00:06:28.754 END TEST nvmf_filesystem_no_in_capsule 00:06:28.754 ************************************ 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 
-le 1 ']' 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.012 ************************************ 00:06:29.012 START TEST nvmf_filesystem_in_capsule 00:06:29.012 ************************************ 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=888669 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 888669 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 888669 ']' 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.012 03:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.012 [2024-05-15 03:01:00.013899] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:29.012 [2024-05-15 03:01:00.013944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.012 [2024-05-15 03:01:00.077040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.012 [2024-05-15 03:01:00.154048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.012 [2024-05-15 03:01:00.154089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
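Once this second nvmf_tgt (pid 888669) is listening on /var/tmp/spdk.sock, filesystem.sh replays the same provisioning as the first test; only the transport's in-capsule size changes. The RPC and host-side sequence, condensed from the trace (rpc_cmd wraps scripts/rpc.py against that socket, and the final loop is a shortened waitforserial):

# Target side: transport with 4096 B in-capsule data, a 512 MiB ramdisk, one subsystem.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1     # 512 MiB total, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect the kernel initiator, then wait for the namespace to appear.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done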
00:06:29.012 [2024-05-15 03:01:00.154098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.012 [2024-05-15 03:01:00.154104] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.012 [2024-05-15 03:01:00.154109] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.012 [2024-05-15 03:01:00.154198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.012 [2024-05-15 03:01:00.154282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.012 [2024-05-15 03:01:00.154371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.012 [2024-05-15 03:01:00.154372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 [2024-05-15 03:01:00.853255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 Malloc1 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 [2024-05-15 03:01:00.999779] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:29.948 [2024-05-15 03:01:01.000023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:29.948 { 00:06:29.948 "name": "Malloc1", 00:06:29.948 "aliases": [ 00:06:29.948 "1421f78f-16e5-467c-b588-1fa26b4ac65e" 00:06:29.948 ], 00:06:29.948 "product_name": "Malloc disk", 00:06:29.948 "block_size": 512, 00:06:29.948 "num_blocks": 1048576, 00:06:29.948 "uuid": "1421f78f-16e5-467c-b588-1fa26b4ac65e", 00:06:29.948 "assigned_rate_limits": { 00:06:29.948 "rw_ios_per_sec": 0, 00:06:29.948 "rw_mbytes_per_sec": 0, 00:06:29.948 "r_mbytes_per_sec": 0, 00:06:29.948 "w_mbytes_per_sec": 0 00:06:29.948 }, 00:06:29.948 "claimed": true, 00:06:29.948 "claim_type": "exclusive_write", 00:06:29.948 "zoned": false, 00:06:29.948 "supported_io_types": { 00:06:29.948 "read": true, 00:06:29.948 "write": true, 00:06:29.948 "unmap": true, 00:06:29.948 "write_zeroes": true, 00:06:29.948 "flush": true, 00:06:29.948 "reset": true, 
00:06:29.948 "compare": false, 00:06:29.948 "compare_and_write": false, 00:06:29.948 "abort": true, 00:06:29.948 "nvme_admin": false, 00:06:29.948 "nvme_io": false 00:06:29.948 }, 00:06:29.948 "memory_domains": [ 00:06:29.948 { 00:06:29.948 "dma_device_id": "system", 00:06:29.948 "dma_device_type": 1 00:06:29.948 }, 00:06:29.948 { 00:06:29.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.948 "dma_device_type": 2 00:06:29.948 } 00:06:29.948 ], 00:06:29.948 "driver_specific": {} 00:06:29.948 } 00:06:29.948 ]' 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:29.948 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:30.209 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:30.209 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:30.209 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:30.209 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:30.209 03:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:31.187 03:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:31.187 03:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:31.187 03:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:31.187 03:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:31.187 03:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:33.100 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:33.358 03:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:34.292 03:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.228 ************************************ 00:06:35.228 START TEST filesystem_in_capsule_ext4 00:06:35.228 ************************************ 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:35.228 03:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:35.228 mke2fs 1.46.5 (30-Dec-2021) 00:06:35.228 Discarding device blocks: 0/522240 done 00:06:35.228 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:35.228 Filesystem UUID: 72282311-663b-45c9-bb95-a453cbe4e36c 00:06:35.228 Superblock backups stored on blocks: 00:06:35.228 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:35.228 00:06:35.228 Allocating group tables: 0/64 done 00:06:35.228 Writing inode tables: 0/64 done 00:06:35.795 Creating journal (8192 blocks): done 00:06:36.620 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:36.620 00:06:36.620 03:01:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:36.620 03:01:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 888669 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.555 00:06:37.555 real 0m2.516s 00:06:37.555 user 0m0.031s 00:06:37.555 sys 0m0.055s 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.555 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:37.555 ************************************ 00:06:37.555 END TEST filesystem_in_capsule_ext4 00:06:37.555 ************************************ 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.814 ************************************ 00:06:37.814 START TEST filesystem_in_capsule_btrfs 00:06:37.814 ************************************ 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:37.814 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:38.073 btrfs-progs v6.6.2 00:06:38.073 See https://btrfs.readthedocs.io for more information. 00:06:38.073 00:06:38.073 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:38.073 NOTE: several default settings have changed in version 5.15, please make sure 00:06:38.073 this does not affect your deployments: 00:06:38.073 - DUP for metadata (-m dup) 00:06:38.073 - enabled no-holes (-O no-holes) 00:06:38.073 - enabled free-space-tree (-R free-space-tree) 00:06:38.073 00:06:38.073 Label: (null) 00:06:38.073 UUID: 3a85fd47-0671-4e1c-85be-c67b27d643e1 00:06:38.073 Node size: 16384 00:06:38.073 Sector size: 4096 00:06:38.073 Filesystem size: 510.00MiB 00:06:38.073 Block group profiles: 00:06:38.073 Data: single 8.00MiB 00:06:38.073 Metadata: DUP 32.00MiB 00:06:38.073 System: DUP 8.00MiB 00:06:38.073 SSD detected: yes 00:06:38.073 Zoned device: no 00:06:38.073 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:38.073 Runtime features: free-space-tree 00:06:38.073 Checksum: crc32c 00:06:38.073 Number of devices: 1 00:06:38.073 Devices: 00:06:38.073 ID SIZE PATH 00:06:38.073 1 510.00MiB /dev/nvme0n1p1 00:06:38.073 00:06:38.073 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:38.073 03:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 888669 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:38.639 00:06:38.639 real 0m0.834s 00:06:38.639 user 0m0.037s 00:06:38.639 sys 0m0.115s 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:38.639 ************************************ 00:06:38.639 END TEST filesystem_in_capsule_btrfs 00:06:38.639 ************************************ 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.639 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.639 ************************************ 00:06:38.639 START TEST filesystem_in_capsule_xfs 00:06:38.640 ************************************ 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:38.640 03:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:38.640 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:38.640 = sectsz=512 attr=2, projid32bit=1 00:06:38.640 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:38.640 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:38.640 data = bsize=4096 blocks=130560, imaxpct=25 00:06:38.640 = sunit=0 swidth=0 blks 00:06:38.640 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:38.640 log =internal log bsize=4096 blocks=16384, version=2 00:06:38.640 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:38.640 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:40.013 Discarding blocks...Done. 
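Annotation: each filesystem_in_capsule_* test traced here (ext4, btrfs, and the xfs run whose mkfs output ends just above) drives the same create/verify cycle from target/filesystem.sh and common/autotest_common.sh. A condensed sketch, assuming the /dev/nvme0n1p1 partition, /mnt/device mount point, and target pid shown in the trace; this is not the verbatim script:

    # make_filesystem: ext4 forces with -F, btrfs and xfs with -f
    fstype=xfs                                   # ext4 | btrfs | xfs per sub-test
    force=-f; [ "$fstype" = ext4 ] && force=-F
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device             # filesystem.sh@23
    touch /mnt/device/aaa && sync                # @24-25: create a file, flush it
    rm /mnt/device/aaa && sync                   # @26-27: remove it, flush again
    umount /mnt/device                           # @30
    kill -0 888669                               # @37: nvmf target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1        # @40: namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # @43: partition still visible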
00:06:40.013 03:01:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:40.013 03:01:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 888669 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.554 00:06:42.554 real 0m3.658s 00:06:42.554 user 0m0.024s 00:06:42.554 sys 0m0.070s 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.554 ************************************ 00:06:42.554 END TEST filesystem_in_capsule_xfs 00:06:42.554 ************************************ 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:42.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.554 03:01:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:42.554 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 888669 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 888669 ']' 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 888669 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 888669 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 888669' 00:06:42.555 killing process with pid 888669 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 888669 00:06:42.555 [2024-05-15 03:01:13.676092] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:42.555 03:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 888669 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:43.122 00:06:43.122 real 0m14.084s 00:06:43.122 user 0m55.293s 00:06:43.122 sys 0m1.276s 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.122 ************************************ 00:06:43.122 END TEST nvmf_filesystem_in_capsule 00:06:43.122 ************************************ 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:43.122 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.123 rmmod nvme_tcp 00:06:43.123 rmmod nvme_fabrics 00:06:43.123 rmmod nvme_keyring 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.123 03:01:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.654 03:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.654 00:06:45.654 real 0m33.970s 00:06:45.654 user 1m46.733s 00:06:45.654 sys 0m6.082s 00:06:45.654 03:01:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.654 03:01:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.654 ************************************ 00:06:45.654 END TEST nvmf_filesystem 00:06:45.654 ************************************ 00:06:45.654 03:01:16 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.654 03:01:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:45.654 03:01:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.654 03:01:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.654 ************************************ 00:06:45.654 START TEST nvmf_target_discovery 00:06:45.654 ************************************ 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.654 * Looking for test storage... 
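Annotation: before the discovery test output continues below, the nvmf_filesystem teardown traced just above reduces to the following sequence. A sketch using the NQN, pid, and interface names from the log; rpc_cmd and killprocess are helper functions from the SPDK test scripts, not standalone binaries:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1             # filesystem.sh@91: drop the test partition
    sync                                                       # @93
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # @94: detach the initiator
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @97
    killprocess 888669                                         # @101: stop the nvmf_tgt app
    modprobe -v -r nvme-tcp                                    # nvmftestfini: unloads nvme_tcp/fabrics/keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                                   # nvmf/common.sh@279: flush the initiator-side address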
00:06:45.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.654 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.655 03:01:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.927 03:01:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:50.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:50.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:50.927 Found net devices under 0000:86:00.0: cvl_0_0 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:50.927 Found net devices under 0000:86:00.1: cvl_0_1 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.927 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:50.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:06:50.928 00:06:50.928 --- 10.0.0.2 ping statistics --- 00:06:50.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.928 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:06:50.928 00:06:50.928 --- 10.0.0.1 ping statistics --- 00:06:50.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.928 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=894575 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 894575 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 894575 ']' 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:50.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.928 03:01:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:50.928 [2024-05-15 03:01:21.807904] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:06:50.928 [2024-05-15 03:01:21.807946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.928 [2024-05-15 03:01:21.866361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.928 [2024-05-15 03:01:21.947103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.928 [2024-05-15 03:01:21.947139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.928 [2024-05-15 03:01:21.947146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.928 [2024-05-15 03:01:21.947152] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.928 [2024-05-15 03:01:21.947157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.928 [2024-05-15 03:01:21.947201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.928 [2024-05-15 03:01:21.947298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.928 [2024-05-15 03:01:21.947373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.928 [2024-05-15 03:01:21.947374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.495 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 [2024-05-15 03:01:22.660471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:51.755 03:01:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 Null1 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 [2024-05-15 03:01:22.705771] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:51.755 [2024-05-15 03:01:22.705960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 Null2 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 Null3 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 Null4 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.755 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:06:52.017 00:06:52.017 Discovery Log Number of Records 6, Generation counter 6 00:06:52.017 =====Discovery Log Entry 0====== 00:06:52.017 trtype: tcp 00:06:52.017 adrfam: ipv4 00:06:52.017 subtype: current discovery subsystem 00:06:52.017 treq: not required 00:06:52.017 portid: 0 00:06:52.017 trsvcid: 4420 00:06:52.017 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:52.017 traddr: 10.0.0.2 00:06:52.017 eflags: explicit discovery connections, duplicate discovery information 00:06:52.017 sectype: none 00:06:52.017 =====Discovery Log Entry 1====== 00:06:52.017 trtype: tcp 00:06:52.017 adrfam: ipv4 00:06:52.017 subtype: nvme subsystem 00:06:52.017 treq: not required 00:06:52.017 portid: 0 00:06:52.017 trsvcid: 4420 00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:52.017 traddr: 10.0.0.2 00:06:52.017 eflags: none 00:06:52.017 sectype: none 00:06:52.017 =====Discovery Log Entry 2====== 00:06:52.017 trtype: tcp 00:06:52.017 adrfam: ipv4 00:06:52.017 subtype: nvme subsystem 00:06:52.017 treq: not required 00:06:52.017 portid: 0 00:06:52.017 trsvcid: 4420 00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:52.017 traddr: 10.0.0.2 00:06:52.017 eflags: none 00:06:52.017 sectype: none 00:06:52.017 =====Discovery Log Entry 3====== 00:06:52.017 trtype: tcp 00:06:52.017 adrfam: ipv4 00:06:52.017 subtype: nvme subsystem 00:06:52.017 treq: not required 00:06:52.017 portid: 0 00:06:52.017 trsvcid: 4420 00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:52.017 traddr: 10.0.0.2 
00:06:51.756 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:06:52.017
00:06:52.017 Discovery Log Number of Records 6, Generation counter 6
00:06:52.017 =====Discovery Log Entry 0======
00:06:52.017 trtype: tcp
00:06:52.017 adrfam: ipv4
00:06:52.017 subtype: current discovery subsystem
00:06:52.017 treq: not required
00:06:52.017 portid: 0
00:06:52.017 trsvcid: 4420
00:06:52.017 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:06:52.017 traddr: 10.0.0.2
00:06:52.017 eflags: explicit discovery connections, duplicate discovery information
00:06:52.017 sectype: none
00:06:52.017 =====Discovery Log Entry 1======
00:06:52.017 trtype: tcp
00:06:52.017 adrfam: ipv4
00:06:52.017 subtype: nvme subsystem
00:06:52.017 treq: not required
00:06:52.017 portid: 0
00:06:52.017 trsvcid: 4420
00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode1
00:06:52.017 traddr: 10.0.0.2
00:06:52.017 eflags: none
00:06:52.017 sectype: none
00:06:52.017 =====Discovery Log Entry 2======
00:06:52.017 trtype: tcp
00:06:52.017 adrfam: ipv4
00:06:52.017 subtype: nvme subsystem
00:06:52.017 treq: not required
00:06:52.017 portid: 0
00:06:52.017 trsvcid: 4420
00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode2
00:06:52.017 traddr: 10.0.0.2
00:06:52.017 eflags: none
00:06:52.017 sectype: none
00:06:52.017 =====Discovery Log Entry 3======
00:06:52.017 trtype: tcp
00:06:52.017 adrfam: ipv4
00:06:52.017 subtype: nvme subsystem
00:06:52.017 treq: not required
00:06:52.017 portid: 0
00:06:52.017 trsvcid: 4420
00:06:52.017 subnqn: nqn.2016-06.io.spdk:cnode3
00:06:52.017 traddr: 10.0.0.2
00:06:52.017 eflags: none
00:06:52.017 sectype: none
00:06:52.017 =====Discovery Log Entry 4======
00:06:52.017 trtype: tcp
00:06:52.018 adrfam: ipv4
00:06:52.018 subtype: nvme subsystem
00:06:52.018 treq: not required
00:06:52.018 portid: 0
00:06:52.018 trsvcid: 4420
00:06:52.018 subnqn: nqn.2016-06.io.spdk:cnode4
00:06:52.018 traddr: 10.0.0.2
00:06:52.018 eflags: none
00:06:52.018 sectype: none
00:06:52.018 =====Discovery Log Entry 5======
00:06:52.018 trtype: tcp
00:06:52.018 adrfam: ipv4
00:06:52.018 subtype: discovery subsystem referral
00:06:52.018 treq: not required
00:06:52.018 portid: 0
00:06:52.018 trsvcid: 4430
00:06:52.018 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:06:52.018 traddr: 10.0.0.2
00:06:52.018 eflags: none
00:06:52.018 sectype: none
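The six records above are the human-readable rendering of the discovery log page; the same query can be made machine-readable, which is how the referrals test later in this log consumes it. A small sketch reusing the host identity from the trace (the host_opts array name is illustrative):

  host_opts=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
             --hostid=80aaeb9f-0274-ea11-906e-0017a4403562)
  # -o json turns the discovery log page into JSON with a .records[] array
  nvme discover "${host_opts[@]}" -t tcp -a 10.0.0.2 -s 4420 -o json |
      jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # expected here: nqn.2016-06.io.spdk:cnode1 through cnode4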
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:06:52.018 Perform nvmf subsystem discovery via RPC
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.018 [
00:06:52.018   {
00:06:52.018     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:06:52.018     "subtype": "Discovery",
00:06:52.018     "listen_addresses": [
00:06:52.018       {
00:06:52.018         "trtype": "TCP",
00:06:52.018         "adrfam": "IPv4",
00:06:52.018         "traddr": "10.0.0.2",
00:06:52.018         "trsvcid": "4420"
00:06:52.018       }
00:06:52.018     ],
00:06:52.018     "allow_any_host": true,
00:06:52.018     "hosts": []
00:06:52.018   },
00:06:52.018   {
00:06:52.018     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:06:52.018     "subtype": "NVMe",
00:06:52.018     "listen_addresses": [
00:06:52.018       {
00:06:52.018         "trtype": "TCP",
00:06:52.018         "adrfam": "IPv4",
00:06:52.018         "traddr": "10.0.0.2",
00:06:52.018         "trsvcid": "4420"
00:06:52.018       }
00:06:52.018     ],
00:06:52.018     "allow_any_host": true,
00:06:52.018     "hosts": [],
00:06:52.018     "serial_number": "SPDK00000000000001",
00:06:52.018     "model_number": "SPDK bdev Controller",
00:06:52.018     "max_namespaces": 32,
00:06:52.018     "min_cntlid": 1,
00:06:52.018     "max_cntlid": 65519,
00:06:52.018     "namespaces": [
00:06:52.018       {
00:06:52.018         "nsid": 1,
00:06:52.018         "bdev_name": "Null1",
00:06:52.018         "name": "Null1",
00:06:52.018         "nguid": "1EFBFAC6BCFE4F19AD3B4DE37BA158E1",
00:06:52.018         "uuid": "1efbfac6-bcfe-4f19-ad3b-4de37ba158e1"
00:06:52.018       }
00:06:52.018     ]
00:06:52.018   },
00:06:52.018   {
00:06:52.018     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:06:52.018     "subtype": "NVMe",
00:06:52.018     "listen_addresses": [
00:06:52.018       {
00:06:52.018         "trtype": "TCP",
00:06:52.018         "adrfam": "IPv4",
00:06:52.018         "traddr": "10.0.0.2",
00:06:52.018         "trsvcid": "4420"
00:06:52.018       }
00:06:52.018     ],
00:06:52.018     "allow_any_host": true,
00:06:52.018     "hosts": [],
00:06:52.018     "serial_number": "SPDK00000000000002",
00:06:52.018     "model_number": "SPDK bdev Controller",
00:06:52.018     "max_namespaces": 32,
00:06:52.018     "min_cntlid": 1,
00:06:52.018     "max_cntlid": 65519,
00:06:52.018     "namespaces": [
00:06:52.018       {
00:06:52.018         "nsid": 1,
00:06:52.018         "bdev_name": "Null2",
00:06:52.018         "name": "Null2",
00:06:52.018         "nguid": "1219EE87EA8C4E50AED37A0568776A22",
00:06:52.018         "uuid": "1219ee87-ea8c-4e50-aed3-7a0568776a22"
00:06:52.018       }
00:06:52.018     ]
00:06:52.018   },
00:06:52.018   {
00:06:52.018     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:06:52.018     "subtype": "NVMe",
00:06:52.018     "listen_addresses": [
00:06:52.018       {
00:06:52.018         "trtype": "TCP",
00:06:52.018         "adrfam": "IPv4",
00:06:52.018         "traddr": "10.0.0.2",
00:06:52.018         "trsvcid": "4420"
00:06:52.018       }
00:06:52.018     ],
00:06:52.018     "allow_any_host": true,
00:06:52.018     "hosts": [],
00:06:52.018     "serial_number": "SPDK00000000000003",
00:06:52.018     "model_number": "SPDK bdev Controller",
00:06:52.018     "max_namespaces": 32,
00:06:52.018     "min_cntlid": 1,
00:06:52.018     "max_cntlid": 65519,
00:06:52.018     "namespaces": [
00:06:52.018       {
00:06:52.018         "nsid": 1,
00:06:52.018         "bdev_name": "Null3",
00:06:52.018         "name": "Null3",
00:06:52.018         "nguid": "A7A4455443024D4C9CEE163C3FF331FE",
00:06:52.018         "uuid": "a7a44554-4302-4d4c-9cee-163c3ff331fe"
00:06:52.018       }
00:06:52.018     ]
00:06:52.018   },
00:06:52.018   {
00:06:52.018     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:06:52.018     "subtype": "NVMe",
00:06:52.018     "listen_addresses": [
00:06:52.018       {
00:06:52.018         "trtype": "TCP",
00:06:52.018         "adrfam": "IPv4",
00:06:52.018         "traddr": "10.0.0.2",
00:06:52.018         "trsvcid": "4420"
00:06:52.018       }
00:06:52.018     ],
00:06:52.018     "allow_any_host": true,
00:06:52.018     "hosts": [],
00:06:52.018     "serial_number": "SPDK00000000000004",
00:06:52.018     "model_number": "SPDK bdev Controller",
00:06:52.018     "max_namespaces": 32,
00:06:52.018     "min_cntlid": 1,
00:06:52.018     "max_cntlid": 65519,
00:06:52.018     "namespaces": [
00:06:52.018       {
00:06:52.018         "nsid": 1,
00:06:52.018         "bdev_name": "Null4",
00:06:52.018         "name": "Null4",
00:06:52.018         "nguid": "41B0812F0A6C44B9BD54E84B0743AE6A",
00:06:52.018         "uuid": "41b0812f-0a6c-44b9-bd54-e84b0743ae6a"
00:06:52.018       }
00:06:52.018     ]
00:06:52.018   }
00:06:52.018 ]
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
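The JSON dump above is the RPC-side view of the same state. For scripted checks it can be reduced with jq, in the spirit of the script's own jq usage; a sketch under the same rpc.py assumption as before:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # list every NQN the target currently exposes (discovery plus cnode1..4 here)
  $rpc_py nvmf_get_subsystems | jq -r '.[].nqn'
  # map each NVMe subsystem to the bdev backing its namespaces
  $rpc_py nvmf_get_subsystems |
      jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) \(.namespaces[].bdev_name)"'

The teardown that follows is the mirror image of the setup: delete each subsystem before its null bdev, then drop the referral.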
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.018 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:06:52.019
03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.019 rmmod nvme_tcp 00:06:52.019 rmmod nvme_fabrics 00:06:52.019 rmmod nvme_keyring 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 894575 ']' 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 894575 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 894575 ']' 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 894575 00:06:52.019 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 894575 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 894575' 00:06:52.280 killing process with pid 894575 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 894575 00:06:52.280 [2024-05-15 03:01:23.218044] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 894575 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.280 03:01:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.829 03:01:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.829 00:06:54.829 real 0m9.206s 00:06:54.829 user 0m7.430s 
00:06:54.829 sys 0m4.402s 00:06:54.829 03:01:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.829 03:01:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:54.829 ************************************ 00:06:54.829 END TEST nvmf_target_discovery 00:06:54.829 ************************************ 00:06:54.829 03:01:25 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.829 03:01:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:54.829 03:01:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.829 03:01:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.829 ************************************ 00:06:54.829 START TEST nvmf_referrals 00:06:54.829 ************************************ 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.829 * Looking for test storage... 00:06:54.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.829 03:01:25 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:54.829 03:01:25 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:54.829 03:01:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:54.830 03:01:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:00.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:00.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:00.119 Found net devices under 0000:86:00.0: cvl_0_0 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:00.119 Found net devices under 0000:86:00.1: cvl_0_1 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
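The interface juggling traced here and in the lines just below is the usual autotest pattern for running target and initiator on one machine: one e810 port (cvl_0_0) is moved into a private network namespace for the SPDK target, the other (cvl_0_1) stays in the default namespace as the initiator side. Condensed from the surrounding trace, with the comments added for orientation:

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two pings that follow verify both directions of that link before nvmf_tgt is started inside the namespace.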
00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:07:00.119 00:07:00.119 --- 10.0.0.2 ping statistics --- 00:07:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.119 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:00.119 00:07:00.119 --- 10.0.0.1 ping statistics --- 00:07:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.119 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.119 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=898299 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 898299 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 898299 ']' 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.378 03:01:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.378 [2024-05-15 03:01:31.364939] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:07:00.378 [2024-05-15 03:01:31.364982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.378 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.378 [2024-05-15 03:01:31.422051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.378 [2024-05-15 03:01:31.494800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.378 [2024-05-15 03:01:31.494840] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.378 [2024-05-15 03:01:31.494847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.378 [2024-05-15 03:01:31.494852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.378 [2024-05-15 03:01:31.494857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.378 [2024-05-15 03:01:31.494904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.378 [2024-05-15 03:01:31.495003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.378 [2024-05-15 03:01:31.495065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.378 [2024-05-15 03:01:31.495067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 [2024-05-15 03:01:32.211369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 [2024-05-15 03:01:32.224576] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:01.316 [2024-05-15 03:01:32.224791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.316 03:01:32 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.316 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.575 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:01.576 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:01.835 03:01:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
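The get_discovery_entries checks traced on either side of this point all follow one pattern: pull the discovery page as JSON from the 8009 discovery listener, then filter by record subtype. A standalone sketch of the same filter (NVME_HOST is the --hostnqn/--hostid pair that nvmf/common.sh defined earlier in this log):

  # referral entries only; swap the subtype string for "nvme subsystem" to get
  # the subsystem entries instead, mirroring the script's get_discovery_entries
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype == "discovery subsystem referral") | .subnqn'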
00:07:02.094 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.353 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.611 rmmod nvme_tcp 00:07:02.611 rmmod nvme_fabrics 00:07:02.611 rmmod nvme_keyring 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 898299 ']' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 898299 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 898299 ']' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 898299 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.611 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 898299 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 898299' 00:07:02.871 killing process with pid 898299 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 898299 00:07:02.871 [2024-05-15 03:01:33.779441] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 898299 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
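The nvmftestfini pass just traced unwinds everything the init path set up. Reduced to the underlying commands visible in this trace, with one step flagged as an assumption:

  kill -9 898299                   # killprocess: stop the nvmf_tgt started earlier
  modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  # _remove_spdk_ns tears down the target namespace; presumably equivalent to:
  # ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1         # finally, drop the initiator-side test address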
00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.871 03:01:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.407 03:01:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.407 00:07:05.407 real 0m10.488s 00:07:05.407 user 0m12.288s 00:07:05.407 sys 0m4.841s 00:07:05.407 03:01:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.407 03:01:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.407 ************************************ 00:07:05.407 END TEST nvmf_referrals 00:07:05.407 ************************************ 00:07:05.407 03:01:36 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.407 03:01:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.407 03:01:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.407 03:01:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.407 ************************************ 00:07:05.407 START TEST nvmf_connect_disconnect 00:07:05.407 ************************************ 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:05.407 * Looking for test storage... 00:07:05.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.407 03:01:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.407 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
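A side note on the host identity that recurs through every discover/connect in this log: nvmf/common.sh, sourced above, generates it once per run. A sketch of the likely derivation; the parameter expansion is an assumption, since the trace only shows nvme gen-hostnqn and the resulting values:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # -> nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: the UUID suffix is reused as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # as set at nvmf/common.sh@19
# "${NVME_HOST[@]}" is then passed to every nvme discover / nvme connect in the suite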
00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.408 03:01:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:10.685 
03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.685 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:10.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:10.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:10.686 Found net devices under 0000:86:00.0: cvl_0_0 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:10.686 Found net devices under 0000:86:00.1: cvl_0_1 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:07:10.686 00:07:10.686 --- 10.0.0.2 ping statistics --- 00:07:10.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.686 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:07:10.686 00:07:10.686 --- 10.0.0.1 ping statistics --- 00:07:10.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.686 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=902351 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 902351 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 902351 ']' 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.686 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.687 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.687 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.687 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.687 03:01:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 [2024-05-15 03:01:41.683910] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
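The network setup traced just above is the heart of the phy (NET_TYPE=phy) configuration: the two E810 ports are split across network namespaces so the initiator and target talk over real hardware on a single host. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
# both directions are ping-verified, then the target runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Every later RPC in this test (tcp transport, Malloc0 bdev, subsystem cnode1, listener on 10.0.0.2:4420) and the five connect/disconnect iterations below run against this namespaced target.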
00:07:10.687 [2024-05-15 03:01:41.683953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.687 [2024-05-15 03:01:41.739585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.687 [2024-05-15 03:01:41.813306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.687 [2024-05-15 03:01:41.813347] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.687 [2024-05-15 03:01:41.813354] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.687 [2024-05-15 03:01:41.813360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.687 [2024-05-15 03:01:41.813365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.687 [2024-05-15 03:01:41.813407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.687 [2024-05-15 03:01:41.813501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.687 [2024-05-15 03:01:41.813521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.687 [2024-05-15 03:01:41.813522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.622 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.622 [2024-05-15 03:01:42.526437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.623 03:01:42 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:11.623 [2024-05-15 03:01:42.578201] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:11.623 [2024-05-15 03:01:42.578430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:11.623 03:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:14.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.104 rmmod nvme_tcp 00:07:28.104 rmmod nvme_fabrics 00:07:28.104 rmmod nvme_keyring 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:28.104 03:01:58 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 902351 ']' 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 902351 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 902351 ']' 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 902351 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 902351 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 902351' 00:07:28.104 killing process with pid 902351 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 902351 00:07:28.104 [2024-05-15 03:01:58.937617] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:28.104 03:01:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 902351 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.104 03:01:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.640 03:02:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.640 00:07:30.640 real 0m25.111s 00:07:30.640 user 1m10.322s 00:07:30.640 sys 0m5.329s 00:07:30.640 03:02:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.640 03:02:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.640 ************************************ 00:07:30.640 END TEST nvmf_connect_disconnect 00:07:30.640 ************************************ 00:07:30.640 03:02:01 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:30.640 03:02:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.640 03:02:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.640 03:02:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.640 ************************************ 00:07:30.640 START TEST nvmf_multitarget 
00:07:30.640 ************************************ 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:30.640 * Looking for test storage... 00:07:30.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.640 03:02:01 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.641 03:02:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.923 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.924 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
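This probe loop, identical to the one in the previous test, resolves each matching PCI function (0x8086:0x159b, an E810 bound to ice) to its kernel interface through sysfs. The core of the mapping as traced, with the first matched function filled in for illustration:

pci=0000:86:00.0                                    # first of the two matched functions
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # glob expands to .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the ifname
net_devs+=("${pci_net_devs[@]}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"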
00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.924 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:35.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:35.924 00:07:35.924 --- 10.0.0.2 ping statistics --- 00:07:35.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.924 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:35.924 00:07:35.924 --- 10.0.0.1 ping statistics --- 00:07:35.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.924 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=908736 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 908736 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 908736 ']' 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.924 03:02:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.924 [2024-05-15 03:02:06.474355] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
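What follows is the actual multitarget exercise, driven by test/nvmf/target/multitarget_rpc.py rather than plain rpc.py. Condensed from the trace below, with the list lengths the test asserts at each step (script path shortened here):

multitarget_rpc.py nvmf_get_targets | jq length     # 1: just the default target
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
multitarget_rpc.py nvmf_get_targets | jq length     # 3 with both new targets present
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
multitarget_rpc.py nvmf_get_targets | jq length     # back to 1 after cleanup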
00:07:35.924 [2024-05-15 03:02:06.474400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.924 [2024-05-15 03:02:06.533913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.924 [2024-05-15 03:02:06.614561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.924 [2024-05-15 03:02:06.614596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.924 [2024-05-15 03:02:06.614606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.924 [2024-05-15 03:02:06.614612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.924 [2024-05-15 03:02:06.614617] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.924 [2024-05-15 03:02:06.614660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.924 [2024-05-15 03:02:06.614757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.924 [2024-05-15 03:02:06.614771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.924 [2024-05-15 03:02:06.614772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:36.182 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:36.440 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:36.440 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:36.440 "nvmf_tgt_1" 00:07:36.440 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:36.698 "nvmf_tgt_2" 00:07:36.698 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:36.698 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:36.698 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:36.698 
03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:36.698 true 00:07:36.698 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:36.956 true 00:07:36.956 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:36.956 03:02:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.956 rmmod nvme_tcp 00:07:36.956 rmmod nvme_fabrics 00:07:36.956 rmmod nvme_keyring 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 908736 ']' 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 908736 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 908736 ']' 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 908736 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.956 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 908736 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 908736' 00:07:37.215 killing process with pid 908736 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 908736 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 908736 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.215 03:02:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.746 03:02:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.746 00:07:39.746 real 0m9.120s 00:07:39.746 user 0m9.024s 00:07:39.746 sys 0m4.165s 00:07:39.746 03:02:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.746 03:02:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:39.746 ************************************ 00:07:39.746 END TEST nvmf_multitarget 00:07:39.746 ************************************ 00:07:39.746 03:02:10 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:39.746 03:02:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.746 03:02:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.746 03:02:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.746 ************************************ 00:07:39.746 START TEST nvmf_rpc 00:07:39.746 ************************************ 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:39.746 * Looking for test storage... 00:07:39.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.746 03:02:10 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.746 
03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.746 03:02:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.008 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.009 03:02:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:45.009 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:45.009 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:45.009 Found net devices under 0000:86:00.0: cvl_0_0 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.009 
03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:45.009 Found net devices under 0000:86:00.1: cvl_0_1 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.009 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:45.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:07:45.266 00:07:45.266 --- 10.0.0.2 ping statistics --- 00:07:45.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.266 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:07:45.266 00:07:45.266 --- 10.0.0.1 ping statistics --- 00:07:45.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.266 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=912523 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 912523 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 912523 ']' 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.266 03:02:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.266 [2024-05-15 03:02:16.292508] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
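[annotation] Condensing the nvmf_tcp_init sequence traced above: the first E810 port (cvl_0_0) is moved into a private namespace and addressed as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator, with an iptables rule opening port 4420 and a ping in each direction verifying the link. The commands below are taken directly from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1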
00:07:45.266 [2024-05-15 03:02:16.292549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.266 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.266 [2024-05-15 03:02:16.349075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.523 [2024-05-15 03:02:16.429150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.523 [2024-05-15 03:02:16.429183] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.523 [2024-05-15 03:02:16.429194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.523 [2024-05-15 03:02:16.429200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.523 [2024-05-15 03:02:16.429205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.523 [2024-05-15 03:02:16.429238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.523 [2024-05-15 03:02:16.429313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.523 [2024-05-15 03:02:16.429375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.523 [2024-05-15 03:02:16.429376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:46.088 "tick_rate": 2300000000, 00:07:46.088 "poll_groups": [ 00:07:46.088 { 00:07:46.088 "name": "nvmf_tgt_poll_group_000", 00:07:46.088 "admin_qpairs": 0, 00:07:46.088 "io_qpairs": 0, 00:07:46.088 "current_admin_qpairs": 0, 00:07:46.088 "current_io_qpairs": 0, 00:07:46.088 "pending_bdev_io": 0, 00:07:46.088 "completed_nvme_io": 0, 00:07:46.088 "transports": [] 00:07:46.088 }, 00:07:46.088 { 00:07:46.088 "name": "nvmf_tgt_poll_group_001", 00:07:46.088 "admin_qpairs": 0, 00:07:46.088 "io_qpairs": 0, 00:07:46.088 "current_admin_qpairs": 0, 00:07:46.088 "current_io_qpairs": 0, 00:07:46.088 "pending_bdev_io": 0, 00:07:46.088 "completed_nvme_io": 0, 00:07:46.088 "transports": [] 00:07:46.088 }, 00:07:46.088 { 00:07:46.088 "name": "nvmf_tgt_poll_group_002", 00:07:46.088 "admin_qpairs": 0, 00:07:46.088 "io_qpairs": 0, 00:07:46.088 "current_admin_qpairs": 0, 00:07:46.088 "current_io_qpairs": 0, 00:07:46.088 "pending_bdev_io": 0, 00:07:46.088 "completed_nvme_io": 0, 00:07:46.088 "transports": [] 
00:07:46.088 }, 00:07:46.088 { 00:07:46.088 "name": "nvmf_tgt_poll_group_003", 00:07:46.088 "admin_qpairs": 0, 00:07:46.088 "io_qpairs": 0, 00:07:46.088 "current_admin_qpairs": 0, 00:07:46.088 "current_io_qpairs": 0, 00:07:46.088 "pending_bdev_io": 0, 00:07:46.088 "completed_nvme_io": 0, 00:07:46.088 "transports": [] 00:07:46.088 } 00:07:46.088 ] 00:07:46.088 }' 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:46.088 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.346 [2024-05-15 03:02:17.254648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:46.346 "tick_rate": 2300000000, 00:07:46.346 "poll_groups": [ 00:07:46.346 { 00:07:46.346 "name": "nvmf_tgt_poll_group_000", 00:07:46.346 "admin_qpairs": 0, 00:07:46.346 "io_qpairs": 0, 00:07:46.346 "current_admin_qpairs": 0, 00:07:46.346 "current_io_qpairs": 0, 00:07:46.346 "pending_bdev_io": 0, 00:07:46.346 "completed_nvme_io": 0, 00:07:46.346 "transports": [ 00:07:46.346 { 00:07:46.346 "trtype": "TCP" 00:07:46.346 } 00:07:46.346 ] 00:07:46.346 }, 00:07:46.346 { 00:07:46.346 "name": "nvmf_tgt_poll_group_001", 00:07:46.346 "admin_qpairs": 0, 00:07:46.346 "io_qpairs": 0, 00:07:46.346 "current_admin_qpairs": 0, 00:07:46.346 "current_io_qpairs": 0, 00:07:46.346 "pending_bdev_io": 0, 00:07:46.346 "completed_nvme_io": 0, 00:07:46.346 "transports": [ 00:07:46.346 { 00:07:46.346 "trtype": "TCP" 00:07:46.346 } 00:07:46.346 ] 00:07:46.346 }, 00:07:46.346 { 00:07:46.346 "name": "nvmf_tgt_poll_group_002", 00:07:46.346 "admin_qpairs": 0, 00:07:46.346 "io_qpairs": 0, 00:07:46.346 "current_admin_qpairs": 0, 00:07:46.346 "current_io_qpairs": 0, 00:07:46.346 "pending_bdev_io": 0, 00:07:46.346 "completed_nvme_io": 0, 00:07:46.346 "transports": [ 00:07:46.346 { 00:07:46.346 "trtype": "TCP" 00:07:46.346 } 00:07:46.346 ] 00:07:46.346 }, 00:07:46.346 { 00:07:46.346 "name": "nvmf_tgt_poll_group_003", 00:07:46.346 "admin_qpairs": 0, 00:07:46.346 "io_qpairs": 0, 00:07:46.346 "current_admin_qpairs": 0, 00:07:46.346 "current_io_qpairs": 0, 00:07:46.346 "pending_bdev_io": 0, 00:07:46.346 "completed_nvme_io": 0, 00:07:46.346 "transports": [ 00:07:46.346 { 00:07:46.346 "trtype": "TCP" 00:07:46.346 } 00:07:46.346 ] 00:07:46.346 } 00:07:46.346 ] 
00:07:46.346 }' 00:07:46.346 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 Malloc1 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 [2024-05-15 03:02:17.422652] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:46.347 [2024-05-15 03:02:17.422886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.347 03:02:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:46.347 [2024-05-15 03:02:17.451403] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:46.347 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:46.347 could not add new controller: failed to write to nvme-fabrics device 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.347 03:02:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.722 03:02:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
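[annotation] The NOT-wrapped connect above is the negative half of the host access-control check: with allow_any_host disabled, the target rejects the host NQN and the connect must fail, after which nvmf_subsystem_add_host makes the same connect succeed. Condensed from the trace (rpc_cmd is the suite's RPC wrapper):

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # expected: Input/output error
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # now succeeds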
00:07:47.722 03:02:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:47.722 03:02:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.722 03:02:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:47.722 03:02:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:49.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:49.621 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.880 [2024-05-15 03:02:20.793129] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:49.880 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:49.880 could not add new controller: failed to write to nvme-fabrics device 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.880 03:02:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.878 03:02:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.878 03:02:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:50.878 03:02:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.878 03:02:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:50.878 03:02:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:53.410 03:02:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:53.410 03:02:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:53.410 03:02:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.410 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:53.410 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.410 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:53.410 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.411 [2024-05-15 03:02:24.273388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.411 03:02:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.347 03:02:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.347 03:02:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:54.347 03:02:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.347 03:02:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:54.347 03:02:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:56.882 
03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.882 [2024-05-15 03:02:27.611397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.882 03:02:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.818 03:02:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.818 03:02:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:57.818 03:02:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.818 03:02:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:57.818 03:02:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.722 03:02:30 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.722 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.722 [2024-05-15 03:02:30.881170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.981 03:02:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.917 03:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.917 03:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:00.917 03:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.917 03:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:00.917 03:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.450 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 [2024-05-15 03:02:34.209110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.451 03:02:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.387 03:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:08:04.388 03:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:04.388 03:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.388 03:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:04.388 03:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:06.291 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:06.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 
[2024-05-15 03:02:37.536378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.550 03:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:07.925 03:02:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.925 03:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:07.925 03:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.925 03:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:07.925 03:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 [2024-05-15 03:02:40.926388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 [2024-05-15 03:02:40.974499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:09.829 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.830 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.089 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 [2024-05-15 03:02:41.026639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.089 
03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.089 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 [2024-05-15 03:02:41.074815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 [2024-05-15 03:02:41.122992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
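
The cycle traced above is target/rpc.sh exercising the full subsystem lifecycle once per loop iteration: create a subsystem with a fixed serial number, expose it on a TCP listener, attach the Malloc1 bdev as namespace 5, open the subsystem to any host, round-trip through the kernel initiator, and tear everything back down. A condensed sketch of one iteration, built only from commands visible in the trace (rpc_cmd is assumed to forward to scripts/rpc.py, and loops=5 is an illustrative count):

# Sketch of one target/rpc.sh loop iteration, reconstructed from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME
loops=5   # illustrative; the script derives its own count

for i in $(seq 1 $loops); do                                          # rpc.sh@81
    $rpc nvmf_create_subsystem $nqn -s $serial                        # rpc.sh@82
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5                      # rpc.sh@84
    $rpc nvmf_subsystem_allow_any_host $nqn                           # rpc.sh@85

    # Kernel-initiator round trip (rpc.sh@86-91): connect, poll lsblk until
    # exactly one device with our serial appears, then disconnect again.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -n $nqn -a 10.0.0.2 -s 4420
    until (( $(lsblk -l -o NAME,SERIAL | grep -c $serial) == 1 )); do sleep 2; done
    nvme disconnect -n $nqn

    $rpc nvmf_subsystem_remove_ns $nqn 5                              # rpc.sh@93
    $rpc nvmf_delete_subsystem $nqn                                   # rpc.sh@94
done

The second loop that follows (rpc.sh@99-107) runs the same create/teardown shape five times but drops the initiator round trip, adding and removing namespace 1 purely over RPC before the final nvmf_get_stats check.
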
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:08:10.090 "tick_rate": 2300000000,
00:08:10.090 "poll_groups": [
00:08:10.090 {
00:08:10.090 "name": "nvmf_tgt_poll_group_000",
00:08:10.090 "admin_qpairs": 2,
00:08:10.090 "io_qpairs": 168,
00:08:10.090 "current_admin_qpairs": 0,
00:08:10.090 "current_io_qpairs": 0,
00:08:10.090 "pending_bdev_io": 0,
00:08:10.090 "completed_nvme_io": 268,
00:08:10.090 "transports": [
00:08:10.090 {
00:08:10.090 "trtype": "TCP"
00:08:10.090 }
00:08:10.090 ]
00:08:10.090 },
00:08:10.090 {
00:08:10.090 "name": "nvmf_tgt_poll_group_001",
00:08:10.090 "admin_qpairs": 2,
00:08:10.090 "io_qpairs": 168,
00:08:10.090 "current_admin_qpairs": 0,
00:08:10.090 "current_io_qpairs": 0,
00:08:10.090 "pending_bdev_io": 0,
00:08:10.090 "completed_nvme_io": 245,
00:08:10.090 "transports": [
00:08:10.090 {
00:08:10.090 "trtype": "TCP"
00:08:10.090 }
00:08:10.090 ]
00:08:10.090 },
00:08:10.090 {
00:08:10.090 "name": "nvmf_tgt_poll_group_002",
00:08:10.090 "admin_qpairs": 1,
00:08:10.090 "io_qpairs": 168,
00:08:10.090 "current_admin_qpairs": 0,
00:08:10.090 "current_io_qpairs": 0,
00:08:10.090 "pending_bdev_io": 0,
00:08:10.090 "completed_nvme_io": 291,
00:08:10.090 "transports": [
00:08:10.090 {
00:08:10.090 "trtype": "TCP"
00:08:10.090 }
00:08:10.090 ]
00:08:10.090 },
00:08:10.090 {
00:08:10.090 "name": "nvmf_tgt_poll_group_003",
00:08:10.090 "admin_qpairs": 2,
00:08:10.090 "io_qpairs": 168,
00:08:10.090 "current_admin_qpairs": 0,
00:08:10.090 "current_io_qpairs": 0,
00:08:10.090 "pending_bdev_io": 0,
00:08:10.090 "completed_nvme_io": 218,
00:08:10.090 "transports": [
00:08:10.090 {
00:08:10.090 "trtype": "TCP"
00:08:10.090 }
00:08:10.090 ]
00:08:10.090 }
00:08:10.090 ]
00:08:10.090 }'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:08:10.090 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 ))
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
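
target/rpc.sh@110-113 pulls the target's poll-group statistics and sanity-checks them with the jsum helper: the four poll groups above sum to 2+2+1+2 = 7 admin qpairs and 4 x 168 = 672 I/O qpairs, matching the (( 7 > 0 )) and (( 672 > 0 )) assertions in the trace. A sketch of jsum as it can be reconstructed from the traced jq and awk steps (the stdin plumbing is an assumption):

# jsum per target/rpc.sh@19-20: apply a jq filter to the captured
# nvmf_get_stats JSON and sum the selected numeric values with awk.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

jsum() {
    local filter=$1                                          # rpc.sh@19
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'    # rpc.sh@20
}

stats=$($rpc nvmf_get_stats)                                 # rpc.sh@110
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))              # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                 # 672 in this run
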
03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 912523 ']' 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 912523 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 912523 ']' 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 912523 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 912523 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 912523' 00:08:10.349 killing process with pid 912523 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 912523 00:08:10.349 [2024-05-15 03:02:41.367338] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:10.349 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 912523 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.609 03:02:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.511 03:02:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.511 00:08:12.511 real 0m33.158s 00:08:12.511 user 1m41.937s 00:08:12.511 sys 0m5.980s 00:08:12.511 03:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.511 03:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.511 ************************************ 00:08:12.511 END TEST nvmf_rpc 00:08:12.511 ************************************ 00:08:12.770 03:02:43 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:12.770 03:02:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:12.770 03:02:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.770 03:02:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.770 ************************************ 00:08:12.770 START TEST nvmf_invalid 00:08:12.770 ************************************ 00:08:12.770 03:02:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:12.771 * Looking for test storage... 00:08:12.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.771 03:02:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:18.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:18.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:18.072 Found net devices under 0000:86:00.0: cvl_0_0 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:18.072 Found net devices under 0000:86:00.1: cvl_0_1 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.072 03:02:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:18.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms
00:08:18.072
00:08:18.072 --- 10.0.0.2 ping statistics ---
00:08:18.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:18.072 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:08:18.072 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:18.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:18.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:08:18.072
00:08:18.072 --- 10.0.0.1 ping statistics ---
00:08:18.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:18.072 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=920285
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 920285
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 920285 ']'
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:18.073 03:02:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:08:18.332 [2024-05-15 03:02:49.269162] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
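
Before this target process came up, nvmf_tcp_init (nvmf/common.sh@229-268, traced above) had already built the test topology: the first e810 port, cvl_0_0, is moved into the cvl_0_0_ns_spdk network namespace as the target interface at 10.0.0.2; the second port, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1; and both directions are verified with ping. Condensed to just the commands from the trace:

# Test topology, condensed from the nvmf_tcp_init steps traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator

This is why nvmf_tgt is launched as ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt above: the listener at 10.0.0.2:4420 only exists inside that namespace, and the initiator reaches it, presumably over the physical link between the two NIC ports.
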
00:08:18.332 [2024-05-15 03:02:49.269203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.332 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.332 [2024-05-15 03:02:49.327803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.332 [2024-05-15 03:02:49.401633] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.332 [2024-05-15 03:02:49.401673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.332 [2024-05-15 03:02:49.401680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.332 [2024-05-15 03:02:49.401686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.332 [2024-05-15 03:02:49.401691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.332 [2024-05-15 03:02:49.401738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.332 [2024-05-15 03:02:49.401836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.332 [2024-05-15 03:02:49.401920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.332 [2024-05-15 03:02:49.401921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7669 00:08:19.267 [2024-05-15 03:02:50.268027] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:19.267 { 00:08:19.267 "nqn": "nqn.2016-06.io.spdk:cnode7669", 00:08:19.267 "tgt_name": "foobar", 00:08:19.267 "method": "nvmf_create_subsystem", 00:08:19.267 "req_id": 1 00:08:19.267 } 00:08:19.267 Got JSON-RPC error response 00:08:19.267 response: 00:08:19.267 { 00:08:19.267 "code": -32603, 00:08:19.267 "message": "Unable to find target foobar" 00:08:19.267 }' 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:19.267 { 00:08:19.267 "nqn": "nqn.2016-06.io.spdk:cnode7669", 00:08:19.267 "tgt_name": "foobar", 00:08:19.267 "method": "nvmf_create_subsystem", 00:08:19.267 "req_id": 1 00:08:19.267 } 00:08:19.267 Got JSON-RPC error response 00:08:19.267 response: 00:08:19.267 { 00:08:19.267 "code": -32603, 00:08:19.267 "message": "Unable to find target foobar" 00:08:19.267 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:19.267 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4937 00:08:19.527 [2024-05-15 03:02:50.464762] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4937: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:19.527 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:19.527 { 00:08:19.527 "nqn": "nqn.2016-06.io.spdk:cnode4937", 00:08:19.527 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:19.527 "method": "nvmf_create_subsystem", 00:08:19.527 "req_id": 1 00:08:19.527 } 00:08:19.527 Got JSON-RPC error response 00:08:19.527 response: 00:08:19.527 { 00:08:19.527 "code": -32602, 00:08:19.527 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:19.527 }' 00:08:19.527 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:19.527 { 00:08:19.527 "nqn": "nqn.2016-06.io.spdk:cnode4937", 00:08:19.527 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:19.527 "method": "nvmf_create_subsystem", 00:08:19.527 "req_id": 1 00:08:19.527 } 00:08:19.527 Got JSON-RPC error response 00:08:19.527 response: 00:08:19.527 { 00:08:19.527 "code": -32602, 00:08:19.527 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:19.527 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:19.527 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:19.527 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22218 00:08:19.527 [2024-05-15 03:02:50.661401] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22218: invalid model number 'SPDK_Controller' 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:19.786 { 00:08:19.786 "nqn": "nqn.2016-06.io.spdk:cnode22218", 00:08:19.786 "model_number": "SPDK_Controller\u001f", 00:08:19.786 "method": "nvmf_create_subsystem", 00:08:19.786 "req_id": 1 00:08:19.786 } 00:08:19.786 Got JSON-RPC error response 00:08:19.786 response: 00:08:19.786 { 00:08:19.786 "code": -32602, 00:08:19.786 "message": "Invalid MN SPDK_Controller\u001f" 00:08:19.786 }' 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:19.786 { 00:08:19.786 "nqn": "nqn.2016-06.io.spdk:cnode22218", 00:08:19.786 "model_number": "SPDK_Controller\u001f", 00:08:19.786 "method": "nvmf_create_subsystem", 00:08:19.786 "req_id": 1 00:08:19.786 } 00:08:19.786 Got JSON-RPC error response 00:08:19.786 response: 00:08:19.786 { 00:08:19.786 "code": -32602, 00:08:19.786 "message": "Invalid MN SPDK_Controller\u001f" 00:08:19.786 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:19.786 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== [xtrace for the remaining 20 loop passes elided: each pass repeats the same printf %x / echo -e / string+= triple to append one random character of '=M. /Hl/k-EoQ!KY:*ee-'] 00:08:19.787 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:19.787 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:19.787 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:08:19.787 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '=M. 
/Hl/k-EoQ!KY:*ee-' 00:08:19.787 03:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=M. /Hl/k-EoQ!KY:*ee-' nqn.2016-06.io.spdk:cnode6068 00:08:20.046 [2024-05-15 03:02:50.982485] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6068: invalid serial number '=M. /Hl/k-EoQ!KY:*ee-' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:20.046 { 00:08:20.046 "nqn": "nqn.2016-06.io.spdk:cnode6068", 00:08:20.046 "serial_number": "=M. /Hl/k-EoQ!KY:*ee-", 00:08:20.046 "method": "nvmf_create_subsystem", 00:08:20.046 "req_id": 1 00:08:20.046 } 00:08:20.046 Got JSON-RPC error response 00:08:20.046 response: 00:08:20.046 { 00:08:20.046 "code": -32602, 00:08:20.046 "message": "Invalid SN =M. /Hl/k-EoQ!KY:*ee-" 00:08:20.046 }' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:20.046 { 00:08:20.046 "nqn": "nqn.2016-06.io.spdk:cnode6068", 00:08:20.046 "serial_number": "=M. /Hl/k-EoQ!KY:*ee-", 00:08:20.046 "method": "nvmf_create_subsystem", 00:08:20.046 "req_id": 1 00:08:20.046 } 00:08:20.046 Got JSON-RPC error response 00:08:20.046 response: 00:08:20.046 { 00:08:20.046 "code": -32602, 00:08:20.046 "message": "Invalid SN =M. /Hl/k-EoQ!KY:*ee-" 00:08:20.046 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x73' 00:08:20.046 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s [xtrace for the intervening loop passes elided: the same printf %x / echo -e / string+= triple runs once per remaining random character of the 41-character serial number] 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 
00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:08:20.304 03:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ']"s6Kq4q;eC+\$aSh)ST /dev/null' 00:08:22.376 03:02:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.282 03:02:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.282 00:08:24.282 real 0m11.700s 00:08:24.282 user 0m19.677s 00:08:24.282 sys 0m4.834s 00:08:24.282 03:02:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.282 03:02:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:24.282 ************************************ 00:08:24.282 END TEST nvmf_invalid 00:08:24.282 ************************************ 00:08:24.542 03:02:55 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:24.542 03:02:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:24.542 03:02:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.542 03:02:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.542 ************************************ 00:08:24.542 START TEST nvmf_abort 00:08:24.542 ************************************ 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:24.542 * Looking for test storage... 00:08:24.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.542 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
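The eval '_remove_spdk_ns 14> /dev/null' trace above is how autotest_common.sh mutes xtrace for a single command: fd 14 is presumably the shell's BASH_XTRACEFD, so pointing that descriptor at /dev/null for one invocation discards only that command's set -x output while the rest of the run stays traced. A minimal sketch of the mechanism (the helper name noisy_cleanup is illustrative, not from SPDK):

exec 14>&2                  # fd 14 starts out mirroring stderr
BASH_XTRACEFD=14            # bash now writes set -x trace to fd 14
set -x
noisy_cleanup() { ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true; }
eval 'noisy_cleanup 14> /dev/null'   # fd 14 goes to /dev/null for this call only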
00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.543 03:02:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.819 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.820 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.820 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.820 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:08:29.820 00:08:29.820 --- 10.0.0.2 ping statistics --- 00:08:29.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.820 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
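Stripped of the xtrace prefixes, the nvmf_tcp_init sequence traced above reduces to a short iproute2 recipe: the first E810 port (cvl_0_0) becomes the target interface inside a private network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the commands in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator-to-target reachability check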
00:08:29.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:29.820 00:08:29.820 --- 10.0.0.1 ping statistics --- 00:08:29.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.820 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:29.820 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=924583 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 924583 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 924583 ']' 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.080 03:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.080 [2024-05-15 03:03:01.025682] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:08:30.080 [2024-05-15 03:03:01.025727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.080 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.080 [2024-05-15 03:03:01.084435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.080 [2024-05-15 03:03:01.157847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.080 [2024-05-15 03:03:01.157890] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:30.080 [2024-05-15 03:03:01.157898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.080 [2024-05-15 03:03:01.157904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.080 [2024-05-15 03:03:01.157912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.080 [2024-05-15 03:03:01.157952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.080 [2024-05-15 03:03:01.158037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.080 [2024-05-15 03:03:01.158039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 [2024-05-15 03:03:01.882516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 Malloc0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 Delay0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.017 03:03:01 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.017 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.018 [2024-05-15 03:03:01.953095] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:31.018 [2024-05-15 03:03:01.953335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.018 03:03:01 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:31.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.018 [2024-05-15 03:03:02.065212] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:33.549 Initializing NVMe Controllers 00:08:33.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.549 controller IO queue size 128 less than required 00:08:33.549 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:33.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:33.549 Initialization complete. Launching workers. 
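Behind the rpc_cmd wrappers, the target that the abort workload is now exercising was assembled with a handful of JSON-RPC calls (rpc_cmd resolves to scripts/rpc.py against the target's default socket). The Malloc0 ramdisk is wrapped in a Delay0 bdev with large artificial latencies so that submitted I/O lingers in the queue long enough to be aborted. Condensed from the trace above:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128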
00:08:33.549 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43596 00:08:33.549 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43657, failed to submit 62 00:08:33.549 success 43600, unsuccess 57, failed 0 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.549 rmmod nvme_tcp 00:08:33.549 rmmod nvme_fabrics 00:08:33.549 rmmod nvme_keyring 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 924583 ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 924583 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 924583 ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 924583 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 924583 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 924583' 00:08:33.549 killing process with pid 924583 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 924583 00:08:33.549 [2024-05-15 03:03:04.424823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 924583 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.549 03:03:04 
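The teardown interleaved above (nvmftestfini -> nvmfcleanup -> killprocess 924583) is easier to follow as a sketch than as xtrace. This is a hedged reconstruction from the trace only; the real helper in common/autotest_common.sh may differ in detail:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1          # the "'[' -z 924583 ']'" check in the trace
      kill -0 "$pid" || return 0         # process already gone: nothing to do
      local pname=$pid
      if [ "$(uname)" = Linux ]; then
          pname=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_1 here
      fi
      # the real helper special-cases pname = sudo; that branch is not taken in this run
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                        # reap the target so the port is free for the next test
  }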
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.549 03:03:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.084 03:03:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.084 00:08:36.084 real 0m11.210s 00:08:36.084 user 0m13.761s 00:08:36.084 sys 0m4.947s 00:08:36.084 03:03:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.084 03:03:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 ************************************ 00:08:36.084 END TEST nvmf_abort 00:08:36.084 ************************************ 00:08:36.084 03:03:06 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:36.084 03:03:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:36.084 03:03:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.084 03:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 ************************************ 00:08:36.084 START TEST nvmf_ns_hotplug_stress 00:08:36.084 ************************************ 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:36.084 * Looking for test storage... 00:08:36.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.084 03:03:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.084 03:03:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.084 03:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.420 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.420 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.420 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.420 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.420 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.421 03:03:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:41.421 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:41.421 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.421 
03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:41.421 Found net devices under 0000:86:00.0: cvl_0_0 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:41.421 Found net devices under 0000:86:00.1: cvl_0_1 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.421 
03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:41.421 00:08:41.421 --- 10.0.0.2 ping statistics --- 00:08:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.421 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:41.421 00:08:41.421 --- 10.0.0.1 ping statistics --- 00:08:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.421 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=929038 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 929038 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 929038 ']' 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.421 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:41.422 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:41.422 03:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.422 [2024-05-15 03:03:11.949671] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:08:41.422 [2024-05-15 03:03:11.949716] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.422 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.422 [2024-05-15 03:03:12.008094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.422 [2024-05-15 03:03:12.086851] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
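The ip/iptables sequence traced above builds the usual phy loopback rig: the two ports of one E810 card are cabled back-to-back, and the target port is moved into a network namespace so initiator traffic actually leaves the host stack. Condensed into plain commands (interface names are this host's ports, as detected above):

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush $TGT_IF; ip -4 addr flush $INI_IF
  ip netns add $NS
  ip link set $TGT_IF netns $NS                   # target port disappears from the root namespace
  ip addr add 10.0.0.1/24 dev $INI_IF             # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
  ip link set $INI_IF up
  ip netns exec $NS ip link set $TGT_IF up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # sanity check before starting the target

This is why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD above: nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk", and every target-side command carries the same prefix.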
00:08:41.422 [2024-05-15 03:03:12.086886] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.422 [2024-05-15 03:03:12.086893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.422 [2024-05-15 03:03:12.086899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.422 [2024-05-15 03:03:12.086904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.422 [2024-05-15 03:03:12.086999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.422 [2024-05-15 03:03:12.087018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.422 [2024-05-15 03:03:12.087019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.680 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.680 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:08:41.680 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.680 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.681 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.681 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.681 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:41.681 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:41.939 [2024-05-15 03:03:12.946909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.939 03:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.198 03:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.198 [2024-05-15 03:03:13.304039] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:42.198 [2024-05-15 03:03:13.304243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.198 03:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.457 03:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:42.715 Malloc0 00:08:42.715 03:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:42.715 Delay0 00:08:42.972 03:03:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.972 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:43.230 NULL1 00:08:43.230 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:43.487 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:43.487 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=929520 00:08:43.487 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:43.487 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.487 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.487 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.745 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:43.745 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:44.005 true 00:08:44.005 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:44.005 03:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.264 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.264 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:44.264 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:44.522 true 00:08:44.522 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:44.522 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.781 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.781 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:44.781 03:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:08:45.040 true 00:08:45.040 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:45.040 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.299 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.557 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:45.557 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:45.557 true 00:08:45.557 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:45.557 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.815 03:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.074 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:46.074 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:46.074 true 00:08:46.332 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:46.332 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.332 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.591 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:46.591 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:46.850 true 00:08:46.850 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:46.850 03:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.108 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.109 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:47.109 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:47.368 true 00:08:47.368 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:47.368 
03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.626 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.626 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:48.010 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:48.010 true 00:08:48.010 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:48.010 03:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.010 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.269 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:48.269 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:48.527 true 00:08:48.527 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:48.528 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.786 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.786 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:48.786 03:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:49.044 true 00:08:49.044 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:49.044 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.303 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.562 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:49.562 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:49.562 true 00:08:49.562 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:49.562 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:49.821 03:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.079 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:50.079 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:50.336 true 00:08:50.336 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:50.336 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.336 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.593 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:50.593 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:50.850 true 00:08:50.850 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:50.850 03:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.107 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.107 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:51.107 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:51.365 true 00:08:51.365 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:51.365 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.623 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.880 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:51.880 03:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:51.880 true 00:08:51.880 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:51.880 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.138 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.396 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:52.396 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:52.653 true 00:08:52.653 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:52.653 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.653 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.911 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:52.911 03:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:53.169 true 00:08:53.169 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:53.169 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.427 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.427 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:53.427 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:53.686 true 00:08:53.686 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:53.686 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.943 03:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.201 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:54.201 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:54.201 true 00:08:54.201 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:54.201 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.458 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.716 03:03:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:54.716 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:54.973 true 00:08:54.973 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:54.973 03:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.232 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.232 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:55.232 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:55.490 true 00:08:55.490 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:55.490 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.748 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.006 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:56.006 03:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:56.006 true 00:08:56.006 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:56.006 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.264 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.523 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:56.523 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:56.782 true 00:08:56.782 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:56.782 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.039 03:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.039 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:57.039 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:57.297 true 00:08:57.297 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:57.297 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.555 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.813 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:57.813 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:57.813 true 00:08:57.813 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:57.813 03:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.071 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.329 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:58.329 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:58.587 true 00:08:58.587 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:58.587 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.846 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.846 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:58.846 03:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:59.105 true 00:08:59.105 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:59.105 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.364 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.623 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:59.623 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:59.623 true 00:08:59.882 03:03:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:08:59.882 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.882 03:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.141 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:00.141 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:00.400 true 00:09:00.400 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:00.400 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.659 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.659 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:00.659 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:00.918 true 00:09:00.918 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:00.918 03:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.177 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.437 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:01.437 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:01.437 true 00:09:01.437 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:01.437 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.695 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.953 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:01.953 03:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:02.212 true 00:09:02.212 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:02.212 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.470 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.470 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:02.470 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:02.728 true 00:09:02.728 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:02.728 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.987 03:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.246 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:03.246 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:03.246 true 00:09:03.246 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:03.246 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.505 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.764 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:03.764 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:04.022 true 00:09:04.022 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:04.023 03:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.023 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.281 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:04.281 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:04.541 true 00:09:04.541 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:04.541 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.800 
03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.059 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:05.059 03:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:05.059 true 00:09:05.059 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:05.060 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.319 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.578 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:05.578 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:05.578 true 00:09:05.837 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:05.837 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.837 03:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.096 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:06.096 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:06.355 true 00:09:06.355 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:06.355 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.614 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.614 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:06.614 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:06.872 true 00:09:06.872 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:06.872 03:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.131 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.390 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:07.390 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:07.390 true 00:09:07.649 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:07.649 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.649 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.908 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:07.908 03:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:08.168 true 00:09:08.168 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:08.168 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.427 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.427 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:08.427 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:08.685 true 00:09:08.685 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:08.685 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.945 03:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.204 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:09.204 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:09.204 true 00:09:09.204 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:09.204 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.463 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.721 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:09.721 03:03:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:09.980 true 00:09:09.980 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:09.980 03:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.239 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.239 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:10.239 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:10.498 true 00:09:10.498 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:10.498 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.757 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.016 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:11.016 03:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:11.016 true 00:09:11.016 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:11.016 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.275 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.534 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:11.534 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:11.793 true 00:09:11.793 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520 00:09:11.793 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.793 03:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.052 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:12.052 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 
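
The sh@44 through sh@50 trace markers repeating above come from the single-namespace phase of ns_hotplug_stress.sh: while the I/O workload process (PID 929520) is still alive, the script hot-removes NSID 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev, and bumps the NULL1 null bdev's size by one unit each pass. A minimal sketch of that loop, reconstructed from the xtrace (only the rpc.py invocations and the script line numbers appear in the log; the loop form and the perf_pid variable name are assumptions, not the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1024
  while kill -0 "$perf_pid"; do                                    # sh@44: run until the workload exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: hot-remove NSID 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: hot-add it back
      null_size=$((null_size + 1))                                 # sh@49: 1025, 1026, ...
      $rpc bdev_null_resize NULL1 "$null_size"                     # sh@50: resize NULL1 under I/O
  done
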
00:09:12.311 true
00:09:12.311 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520
00:09:12.311 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.569 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:12.828 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:09:12.828 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:09:12.828 true
00:09:12.828 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520
00:09:12.828 03:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.087 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:13.345 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:09:13.345 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:09:13.345 true
00:09:13.604 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520
00:09:13.604 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.604 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:13.864 Initializing NVMe Controllers
00:09:13.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:13.864 Controller IO queue size 128, less than required.
00:09:13.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:13.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:13.864 Initialization complete. Launching workers.
00:09:13.864 ========================================================
00:09:13.864 Latency(us)
00:09:13.864 Device Information : IOPS MiB/s Average min max
00:09:13.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26211.03 12.80 4883.37 2450.35 44111.03
00:09:13.864 ========================================================
00:09:13.864 Total : 26211.03 12.80 4883.37 2450.35 44111.03
00:09:13.864
00:09:13.864 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:09:13.864 03:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:09:14.133 true
00:09:14.133 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 929520
00:09:14.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (929520) - No such process
00:09:14.133 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 929520
00:09:14.133 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:14.133 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:14.447 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:14.447 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:14.448 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:14.448 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:14.448 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:14.732 null0
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:14.732 null1
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:14.732 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:14.992 null2
00:09:14.992 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:14.992 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:14.992 03:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:14.992 null3
00:09:15.251 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:15.251 03:03:46
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.251 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:15.251 null4 00:09:15.251 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.251 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.251 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:15.510 null5 00:09:15.510 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.510 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.510 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:15.770 null6 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:15.770 null7 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
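
Two details in the perf summary printed above are worth a note. The "Controller IO queue size 128, less than required" warning means the target exposes a shallower I/O queue (128) than the initiator requested, so surplus requests sit in the host NVMe driver, exactly as the log's follow-up line says. And the throughput columns are mutually consistent with a 512-byte I/O size, since 26211.03 IOPS x 512 B / 2^20 ≈ 12.80 MiB/s, matching the reported MiB/s figure; the actual I/O size is not shown in this excerpt, so 512 B is an inference, not a logged value.
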
00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
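
Each forked worker runs the script's add_remove function, whose sh@14, sh@16, sh@17 and sh@18 markers interleave through the rest of this log: it pins one null bdev to one fixed NSID and attaches/detaches it ten times. A sketch reconstructed from those markers (the function's exact syntax is an assumption; $rpc abbreviates the rpc.py path as in the sketch above):

  add_remove() {
      local nsid=$1 bdev=$2                                                        # sh@14
      for ((i = 0; i < 10; i++)); do                                               # sh@16
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17: attach
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18: detach
      done
  }
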
00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
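
The launcher driving those workers is visible in the sh@58 through sh@66 markers: create eight null bdevs (100 and 4096 are the size and block-size arguments traced at sh@60), fork one add_remove worker per bdev with NSID i+1 bound to null i (the trace shows add_remove 1 null0, add_remove 2 null1, and so on), collect the PIDs, and wait on them all; the "wait 935092 935094 ..." just below is that final step. Sketch under the same caveats as above:

  nthreads=8; pids=()                           # sh@58
  for ((i = 0; i < nthreads; i++)); do          # sh@59
      $rpc bdev_null_create "null$i" 100 4096   # sh@60: null bdev, size 100, block size 4096
  done
  for ((i = 0; i < nthreads; i++)); do          # sh@62
      add_remove $((i + 1)) "null$i" &          # sh@63: e.g. add_remove 1 null0
      pids+=($!)                                # sh@64: remember the worker PID
  done
  wait "${pids[@]}"                             # sh@66: join all eight workers
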
00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 935092 935094 935097 935101 935104 935107 935110 935112 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.770 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.771 03:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.030 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.289 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.548 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.807 03:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.067 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.327 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:09:17.328 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.328 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.328 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.328 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.616 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.874 03:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.874 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.874 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:18.132 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.389 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.842 03:03:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.842 03:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.101 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.358 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.358 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.359 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.617 rmmod nvme_tcp 00:09:19.617 rmmod nvme_fabrics 00:09:19.617 rmmod nvme_keyring 00:09:19.617 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 929038 ']' 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 929038 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 929038 ']' 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 929038 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 929038 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 929038' 00:09:19.876 killing process 
with pid 929038
00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 929038
00:09:19.876 [2024-05-15 03:03:50.815266] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:09:19.876 03:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 929038
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:19.876 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:19.877 03:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:22.415 03:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:22.415
00:09:22.415 real 0m46.311s
00:09:22.415 user 3m18.124s
00:09:22.415 sys 0m16.374s
00:09:22.415 03:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:22.415 03:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:22.415 ************************************
00:09:22.415 END TEST nvmf_ns_hotplug_stress
00:09:22.415 ************************************
00:09:22.415 03:03:53 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:22.415 03:03:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:09:22.415 03:03:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:22.415 03:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:22.415 ************************************
00:09:22.415 START TEST nvmf_connect_stress
00:09:22.415 ************************************
00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:22.415 * Looking for test storage...
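The ns_hotplug_stress.sh@16-@18 trace that fills the pages above boils down to ten add/remove passes over namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, each namespace backed by one of the null0-null7 bdevs. A minimal sketch of that loop, assuming the randomized ordering seen in the trace comes from shuf (the shuffling mechanism itself is not visible in this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 10; i++)); do
    for n in $(seq 1 8 | shuf); do    # attach null0..null7 as namespaces 1..8, in random order
        $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
    done
    for n in $(seq 1 8 | shuf); do    # then detach them again, also in random order
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    done
done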
00:09:22.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.415 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.416 03:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.687 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.688 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.688 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.688 03:03:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:27.688 Found net devices under 0000:86:00.1: cvl_0_1
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:27.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:27.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:09:27.688
00:09:27.688 --- 10.0.0.2 ping statistics ---
00:09:27.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:27.688 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:27.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:27.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:09:27.688
00:09:27.688 --- 10.0.0.1 ping statistics ---
00:09:27.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:27.688 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=939328
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 939328
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 939328 ']'
00:09:27.688 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:27.689 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:27.689 03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable
03:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:27.689 [2024-05-15 03:03:58.711230] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
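The nvmf_tcp_init/nvmfappstart sequence traced above is what lets a single host play both roles: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace for the target, while its sibling port cvl_0_1 stays in the root namespace as the initiator side. Condensed from the trace (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this run and will differ on other hosts):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # verify the target-side address answers
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE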
00:09:27.689 [2024-05-15 03:03:58.711270] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.689 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.689 [2024-05-15 03:03:58.769068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.689 [2024-05-15 03:03:58.847308] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.689 [2024-05-15 03:03:58.847345] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.689 [2024-05-15 03:03:58.847352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.689 [2024-05-15 03:03:58.847358] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.689 [2024-05-15 03:03:58.847362] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.689 [2024-05-15 03:03:58.847451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.689 [2024-05-15 03:03:58.847551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.689 [2024-05-15 03:03:58.847553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.626 [2024-05-15 03:03:59.556466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.626 [2024-05-15 03:03:59.580490] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:28.626 [2024-05-15 03:03:59.587554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.626 NULL1 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=939574 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 939574 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.626 03:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.884 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.884 03:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 939574 00:09:28.884 03:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.884 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.884 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.452 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.452 03:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 939574 00:09:29.452 03:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.452 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.452 03:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.711 03:04:00 
(the [[ 0 == 0 ]] / kill -0 939574 / rpc_cmd poll cycle repeats from 00:09:28.884 through 00:09:38.561)
00:09:38.820 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 939574 00:09:38.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (939574) - No such process 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 939574 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.820 rmmod nvme_tcp 00:09:38.820 rmmod nvme_fabrics 00:09:38.820 rmmod nvme_keyring 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 939328 ']' 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 939328 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 939328 ']' 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 939328 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 939328 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 939328' 00:09:38.820 killing process with pid 939328 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 939328 00:09:38.820 [2024-05-15 03:04:09.864809] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:38.820 03:04:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 939328 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.079 
03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.079 03:04:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.985 03:04:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.985 00:09:40.985 real 0m18.967s 00:09:40.985 user 0m40.967s 00:09:40.985 sys 0m8.056s 00:09:40.985 03:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.985 03:04:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.985 ************************************ 00:09:40.985 END TEST nvmf_connect_stress 00:09:40.985 ************************************ 00:09:41.244 03:04:12 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:41.244 03:04:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:41.244 03:04:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.244 03:04:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 ************************************ 00:09:41.244 START TEST nvmf_fused_ordering 00:09:41.244 ************************************ 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:41.244 * Looking for test storage... 00:09:41.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
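The NVME_HOSTNQN / NVME_HOSTID pair set above is the initiator identity that the rest of these tests reuse for every nvme connect: nvme gen-hostnqn emits a UUID-based NQN, and the UUID suffix doubles as the host ID. A sketch of the same derivation, assuming the ID is just the NQN's UUID tail (common.sh may extract it differently):

  # Derive an initiator identity the way nvmf/common.sh does above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the UUID after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Later connects can then reuse it, for example:
  # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn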
00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.244 03:04:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.245 03:04:12 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.245 03:04:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.516 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.776 
03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:46.776 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:46.776 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
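The gather_supported_nvmf_pci_devs walk above whitelists NIC PCI device IDs (E810 0x1592/0x159b, X722 0x37d2, assorted Mellanox IDs) and then looks under sysfs for the net devices bound to each match, producing the "Found 0000:86:00.x" and "Found net devices under ..." lines that follow. A simplified sketch of that discovery; the sysfs layout is standard, the ID list below is an abbreviated subset of the one in the trace:

  # Sketch: enumerate NVMe-oF-capable NICs by PCI vendor:device ID, then find their netdevs.
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      case "$vendor:$device" in
          0x8086:0x1592|0x8086:0x159b) ;;                # Intel E810 (the IDs this run matched)
          0x8086:0x37d2) ;;                              # Intel X722
          0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x101d) ;;  # a few of the Mellanox IDs from the list
          *) continue ;;
      esac
      for net in "$pci"/net/*; do                        # the netdev name lives under <pci>/net/
          [[ -e $net ]] && net_devs+=("${net##*/}")
      done
  done
  printf 'Found net device: %s\n' "${net_devs[@]}"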
00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:46.776 Found net devices under 0000:86:00.0: cvl_0_0 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:46.776 Found net devices under 0000:86:00.1: cvl_0_1 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering 
-- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.776 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:46.777 00:09:46.777 --- 10.0.0.2 ping statistics --- 00:09:46.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.777 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:09:46.777 00:09:46.777 --- 10.0.0.1 ping statistics --- 00:09:46.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.777 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.777 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.035 03:04:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:47.035 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.035 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:47.035 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.035 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=944727 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 944727 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 944727 ']' 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:47.036 03:04:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.036 [2024-05-15 03:04:18.018336] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:47.036 [2024-05-15 03:04:18.018377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.036 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.036 [2024-05-15 03:04:18.074474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.036 [2024-05-15 03:04:18.149350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:47.036 [2024-05-15 03:04:18.149389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.036 [2024-05-15 03:04:18.149396] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.036 [2024-05-15 03:04:18.149402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.036 [2024-05-15 03:04:18.149408] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.036 [2024-05-15 03:04:18.149433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-05-15 03:04:18.844925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-05-15 03:04:18.868938] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:47.972 [2024-05-15 03:04:18.869129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 NULL1 00:09:47.972 03:04:18 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 03:04:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.973 03:04:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:47.973 [2024-05-15 03:04:18.921925] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:47.973 [2024-05-15 03:04:18.921959] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944970 ] 00:09:47.973 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.231 Attached to nqn.2016-06.io.spdk:cnode1 00:09:48.231 Namespace ID: 1 size: 1GB 00:09:48.231 fused_ordering(0) 00:09:48.231 fused_ordering(1) 00:09:48.231 fused_ordering(2) 00:09:48.231 fused_ordering(3) 00:09:48.231 fused_ordering(4) 00:09:48.231 fused_ordering(5) 00:09:48.231 fused_ordering(6) 00:09:48.231 fused_ordering(7) 00:09:48.231 fused_ordering(8) 00:09:48.231 fused_ordering(9) 00:09:48.231 fused_ordering(10) 00:09:48.231 fused_ordering(11) 00:09:48.231 fused_ordering(12) 00:09:48.231 fused_ordering(13) 00:09:48.231 fused_ordering(14) 00:09:48.231 fused_ordering(15) 00:09:48.231 fused_ordering(16) 00:09:48.231 fused_ordering(17) 00:09:48.231 fused_ordering(18) 00:09:48.231 fused_ordering(19) 00:09:48.231 fused_ordering(20) 00:09:48.231 fused_ordering(21) 00:09:48.231 fused_ordering(22) 00:09:48.231 fused_ordering(23) 00:09:48.231 fused_ordering(24) 00:09:48.231 fused_ordering(25) 00:09:48.231 fused_ordering(26) 00:09:48.231 fused_ordering(27) 00:09:48.231 fused_ordering(28) 00:09:48.231 fused_ordering(29) 00:09:48.231 fused_ordering(30) 00:09:48.231 fused_ordering(31) 00:09:48.231 fused_ordering(32) 00:09:48.231 fused_ordering(33) 00:09:48.231 fused_ordering(34) 00:09:48.231 fused_ordering(35) 00:09:48.231 fused_ordering(36) 00:09:48.231 fused_ordering(37) 00:09:48.231 fused_ordering(38) 00:09:48.231 fused_ordering(39) 00:09:48.231 fused_ordering(40) 00:09:48.231 fused_ordering(41) 00:09:48.231 fused_ordering(42) 00:09:48.231 fused_ordering(43) 00:09:48.231 fused_ordering(44) 00:09:48.231 fused_ordering(45) 00:09:48.231 fused_ordering(46) 00:09:48.231 fused_ordering(47) 00:09:48.231 fused_ordering(48) 00:09:48.231 fused_ordering(49) 00:09:48.231 fused_ordering(50) 00:09:48.231 fused_ordering(51) 00:09:48.231 fused_ordering(52) 00:09:48.231 fused_ordering(53) 00:09:48.231 fused_ordering(54) 00:09:48.231 fused_ordering(55) 
00:09:48.231 fused_ordering(56)
(the fused_ordering counter lines continue sequentially, one per entry)
00:09:48.749 fused_ordering(594)
00:09:48.749 fused_ordering(595) 00:09:48.749 fused_ordering(596) 00:09:48.749 fused_ordering(597) 00:09:48.749 fused_ordering(598) 00:09:48.749 fused_ordering(599) 00:09:48.749 fused_ordering(600) 00:09:48.749 fused_ordering(601) 00:09:48.749 fused_ordering(602) 00:09:48.749 fused_ordering(603) 00:09:48.749 fused_ordering(604) 00:09:48.749 fused_ordering(605) 00:09:48.749 fused_ordering(606) 00:09:48.749 fused_ordering(607) 00:09:48.749 fused_ordering(608) 00:09:48.749 fused_ordering(609) 00:09:48.749 fused_ordering(610) 00:09:48.749 fused_ordering(611) 00:09:48.749 fused_ordering(612) 00:09:48.749 fused_ordering(613) 00:09:48.749 fused_ordering(614) 00:09:48.749 fused_ordering(615) 00:09:49.315 fused_ordering(616) 00:09:49.315 fused_ordering(617) 00:09:49.315 fused_ordering(618) 00:09:49.315 fused_ordering(619) 00:09:49.315 fused_ordering(620) 00:09:49.315 fused_ordering(621) 00:09:49.315 fused_ordering(622) 00:09:49.315 fused_ordering(623) 00:09:49.315 fused_ordering(624) 00:09:49.315 fused_ordering(625) 00:09:49.315 fused_ordering(626) 00:09:49.315 fused_ordering(627) 00:09:49.315 fused_ordering(628) 00:09:49.315 fused_ordering(629) 00:09:49.315 fused_ordering(630) 00:09:49.315 fused_ordering(631) 00:09:49.315 fused_ordering(632) 00:09:49.315 fused_ordering(633) 00:09:49.315 fused_ordering(634) 00:09:49.315 fused_ordering(635) 00:09:49.315 fused_ordering(636) 00:09:49.315 fused_ordering(637) 00:09:49.315 fused_ordering(638) 00:09:49.315 fused_ordering(639) 00:09:49.315 fused_ordering(640) 00:09:49.315 fused_ordering(641) 00:09:49.315 fused_ordering(642) 00:09:49.315 fused_ordering(643) 00:09:49.315 fused_ordering(644) 00:09:49.315 fused_ordering(645) 00:09:49.315 fused_ordering(646) 00:09:49.315 fused_ordering(647) 00:09:49.315 fused_ordering(648) 00:09:49.315 fused_ordering(649) 00:09:49.315 fused_ordering(650) 00:09:49.315 fused_ordering(651) 00:09:49.315 fused_ordering(652) 00:09:49.315 fused_ordering(653) 00:09:49.315 fused_ordering(654) 00:09:49.315 fused_ordering(655) 00:09:49.315 fused_ordering(656) 00:09:49.315 fused_ordering(657) 00:09:49.315 fused_ordering(658) 00:09:49.315 fused_ordering(659) 00:09:49.315 fused_ordering(660) 00:09:49.315 fused_ordering(661) 00:09:49.315 fused_ordering(662) 00:09:49.315 fused_ordering(663) 00:09:49.315 fused_ordering(664) 00:09:49.315 fused_ordering(665) 00:09:49.315 fused_ordering(666) 00:09:49.315 fused_ordering(667) 00:09:49.315 fused_ordering(668) 00:09:49.315 fused_ordering(669) 00:09:49.315 fused_ordering(670) 00:09:49.315 fused_ordering(671) 00:09:49.315 fused_ordering(672) 00:09:49.315 fused_ordering(673) 00:09:49.315 fused_ordering(674) 00:09:49.315 fused_ordering(675) 00:09:49.315 fused_ordering(676) 00:09:49.315 fused_ordering(677) 00:09:49.315 fused_ordering(678) 00:09:49.315 fused_ordering(679) 00:09:49.315 fused_ordering(680) 00:09:49.315 fused_ordering(681) 00:09:49.315 fused_ordering(682) 00:09:49.315 fused_ordering(683) 00:09:49.315 fused_ordering(684) 00:09:49.315 fused_ordering(685) 00:09:49.315 fused_ordering(686) 00:09:49.315 fused_ordering(687) 00:09:49.315 fused_ordering(688) 00:09:49.315 fused_ordering(689) 00:09:49.315 fused_ordering(690) 00:09:49.315 fused_ordering(691) 00:09:49.315 fused_ordering(692) 00:09:49.315 fused_ordering(693) 00:09:49.315 fused_ordering(694) 00:09:49.315 fused_ordering(695) 00:09:49.315 fused_ordering(696) 00:09:49.315 fused_ordering(697) 00:09:49.315 fused_ordering(698) 00:09:49.315 fused_ordering(699) 00:09:49.315 fused_ordering(700) 00:09:49.315 fused_ordering(701) 00:09:49.315 
fused_ordering(702) 00:09:49.315 fused_ordering(703) 00:09:49.315 fused_ordering(704) 00:09:49.315 fused_ordering(705) 00:09:49.315 fused_ordering(706) 00:09:49.315 fused_ordering(707) 00:09:49.315 fused_ordering(708) 00:09:49.315 fused_ordering(709) 00:09:49.315 fused_ordering(710) 00:09:49.315 fused_ordering(711) 00:09:49.315 fused_ordering(712) 00:09:49.315 fused_ordering(713) 00:09:49.315 fused_ordering(714) 00:09:49.315 fused_ordering(715) 00:09:49.315 fused_ordering(716) 00:09:49.315 fused_ordering(717) 00:09:49.315 fused_ordering(718) 00:09:49.315 fused_ordering(719) 00:09:49.315 fused_ordering(720) 00:09:49.315 fused_ordering(721) 00:09:49.315 fused_ordering(722) 00:09:49.315 fused_ordering(723) 00:09:49.315 fused_ordering(724) 00:09:49.315 fused_ordering(725) 00:09:49.315 fused_ordering(726) 00:09:49.315 fused_ordering(727) 00:09:49.315 fused_ordering(728) 00:09:49.315 fused_ordering(729) 00:09:49.315 fused_ordering(730) 00:09:49.315 fused_ordering(731) 00:09:49.315 fused_ordering(732) 00:09:49.315 fused_ordering(733) 00:09:49.315 fused_ordering(734) 00:09:49.315 fused_ordering(735) 00:09:49.315 fused_ordering(736) 00:09:49.315 fused_ordering(737) 00:09:49.315 fused_ordering(738) 00:09:49.315 fused_ordering(739) 00:09:49.315 fused_ordering(740) 00:09:49.315 fused_ordering(741) 00:09:49.315 fused_ordering(742) 00:09:49.315 fused_ordering(743) 00:09:49.315 fused_ordering(744) 00:09:49.315 fused_ordering(745) 00:09:49.315 fused_ordering(746) 00:09:49.315 fused_ordering(747) 00:09:49.315 fused_ordering(748) 00:09:49.315 fused_ordering(749) 00:09:49.315 fused_ordering(750) 00:09:49.315 fused_ordering(751) 00:09:49.315 fused_ordering(752) 00:09:49.315 fused_ordering(753) 00:09:49.315 fused_ordering(754) 00:09:49.315 fused_ordering(755) 00:09:49.315 fused_ordering(756) 00:09:49.315 fused_ordering(757) 00:09:49.315 fused_ordering(758) 00:09:49.315 fused_ordering(759) 00:09:49.315 fused_ordering(760) 00:09:49.315 fused_ordering(761) 00:09:49.315 fused_ordering(762) 00:09:49.315 fused_ordering(763) 00:09:49.315 fused_ordering(764) 00:09:49.315 fused_ordering(765) 00:09:49.315 fused_ordering(766) 00:09:49.315 fused_ordering(767) 00:09:49.315 fused_ordering(768) 00:09:49.315 fused_ordering(769) 00:09:49.315 fused_ordering(770) 00:09:49.315 fused_ordering(771) 00:09:49.315 fused_ordering(772) 00:09:49.315 fused_ordering(773) 00:09:49.315 fused_ordering(774) 00:09:49.315 fused_ordering(775) 00:09:49.315 fused_ordering(776) 00:09:49.315 fused_ordering(777) 00:09:49.315 fused_ordering(778) 00:09:49.315 fused_ordering(779) 00:09:49.315 fused_ordering(780) 00:09:49.315 fused_ordering(781) 00:09:49.315 fused_ordering(782) 00:09:49.315 fused_ordering(783) 00:09:49.315 fused_ordering(784) 00:09:49.315 fused_ordering(785) 00:09:49.315 fused_ordering(786) 00:09:49.315 fused_ordering(787) 00:09:49.315 fused_ordering(788) 00:09:49.315 fused_ordering(789) 00:09:49.315 fused_ordering(790) 00:09:49.315 fused_ordering(791) 00:09:49.315 fused_ordering(792) 00:09:49.315 fused_ordering(793) 00:09:49.315 fused_ordering(794) 00:09:49.315 fused_ordering(795) 00:09:49.315 fused_ordering(796) 00:09:49.315 fused_ordering(797) 00:09:49.315 fused_ordering(798) 00:09:49.315 fused_ordering(799) 00:09:49.315 fused_ordering(800) 00:09:49.315 fused_ordering(801) 00:09:49.315 fused_ordering(802) 00:09:49.315 fused_ordering(803) 00:09:49.315 fused_ordering(804) 00:09:49.315 fused_ordering(805) 00:09:49.315 fused_ordering(806) 00:09:49.315 fused_ordering(807) 00:09:49.315 fused_ordering(808) 00:09:49.315 fused_ordering(809) 
00:09:49.315 fused_ordering(810) 00:09:49.315 fused_ordering(811) 00:09:49.315 fused_ordering(812) 00:09:49.315 fused_ordering(813) 00:09:49.315 fused_ordering(814) 00:09:49.315 fused_ordering(815) 00:09:49.315 fused_ordering(816) 00:09:49.315 fused_ordering(817) 00:09:49.315 fused_ordering(818) 00:09:49.315 fused_ordering(819) 00:09:49.315 fused_ordering(820) 00:09:49.883 fused_o[2024-05-15 03:04:20.744360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dd700 is same with the state(5) to be set 00:09:49.883 rdering(821) 00:09:49.883 fused_ordering(822) 00:09:49.883 fused_ordering(823) 00:09:49.883 fused_ordering(824) 00:09:49.883 fused_ordering(825) 00:09:49.883 fused_ordering(826) 00:09:49.883 fused_ordering(827) 00:09:49.883 fused_ordering(828) 00:09:49.883 fused_ordering(829) 00:09:49.883 fused_ordering(830) 00:09:49.883 fused_ordering(831) 00:09:49.883 fused_ordering(832) 00:09:49.883 fused_ordering(833) 00:09:49.883 fused_ordering(834) 00:09:49.883 fused_ordering(835) 00:09:49.883 fused_ordering(836) 00:09:49.883 fused_ordering(837) 00:09:49.883 fused_ordering(838) 00:09:49.883 fused_ordering(839) 00:09:49.883 fused_ordering(840) 00:09:49.883 fused_ordering(841) 00:09:49.883 fused_ordering(842) 00:09:49.883 fused_ordering(843) 00:09:49.883 fused_ordering(844) 00:09:49.883 fused_ordering(845) 00:09:49.883 fused_ordering(846) 00:09:49.883 fused_ordering(847) 00:09:49.883 fused_ordering(848) 00:09:49.883 fused_ordering(849) 00:09:49.883 fused_ordering(850) 00:09:49.883 fused_ordering(851) 00:09:49.883 fused_ordering(852) 00:09:49.883 fused_ordering(853) 00:09:49.883 fused_ordering(854) 00:09:49.883 fused_ordering(855) 00:09:49.883 fused_ordering(856) 00:09:49.883 fused_ordering(857) 00:09:49.883 fused_ordering(858) 00:09:49.883 fused_ordering(859) 00:09:49.883 fused_ordering(860) 00:09:49.883 fused_ordering(861) 00:09:49.883 fused_ordering(862) 00:09:49.883 fused_ordering(863) 00:09:49.883 fused_ordering(864) 00:09:49.883 fused_ordering(865) 00:09:49.883 fused_ordering(866) 00:09:49.883 fused_ordering(867) 00:09:49.883 fused_ordering(868) 00:09:49.883 fused_ordering(869) 00:09:49.883 fused_ordering(870) 00:09:49.883 fused_ordering(871) 00:09:49.883 fused_ordering(872) 00:09:49.883 fused_ordering(873) 00:09:49.883 fused_ordering(874) 00:09:49.883 fused_ordering(875) 00:09:49.883 fused_ordering(876) 00:09:49.883 fused_ordering(877) 00:09:49.883 fused_ordering(878) 00:09:49.883 fused_ordering(879) 00:09:49.883 fused_ordering(880) 00:09:49.883 fused_ordering(881) 00:09:49.883 fused_ordering(882) 00:09:49.883 fused_ordering(883) 00:09:49.883 fused_ordering(884) 00:09:49.883 fused_ordering(885) 00:09:49.883 fused_ordering(886) 00:09:49.883 fused_ordering(887) 00:09:49.883 fused_ordering(888) 00:09:49.883 fused_ordering(889) 00:09:49.883 fused_ordering(890) 00:09:49.883 fused_ordering(891) 00:09:49.883 fused_ordering(892) 00:09:49.883 fused_ordering(893) 00:09:49.883 fused_ordering(894) 00:09:49.883 fused_ordering(895) 00:09:49.883 fused_ordering(896) 00:09:49.883 fused_ordering(897) 00:09:49.883 fused_ordering(898) 00:09:49.883 fused_ordering(899) 00:09:49.883 fused_ordering(900) 00:09:49.883 fused_ordering(901) 00:09:49.883 fused_ordering(902) 00:09:49.883 fused_ordering(903) 00:09:49.883 fused_ordering(904) 00:09:49.883 fused_ordering(905) 00:09:49.883 fused_ordering(906) 00:09:49.883 fused_ordering(907) 00:09:49.883 fused_ordering(908) 00:09:49.883 fused_ordering(909) 00:09:49.883 fused_ordering(910) 00:09:49.883 fused_ordering(911) 00:09:49.883 
fused_ordering(912) 00:09:49.883 fused_ordering(913) 00:09:49.883 fused_ordering(914) 00:09:49.883 fused_ordering(915) 00:09:49.883 fused_ordering(916) 00:09:49.883 fused_ordering(917) 00:09:49.883 fused_ordering(918) 00:09:49.883 fused_ordering(919) 00:09:49.883 fused_ordering(920) 00:09:49.883 fused_ordering(921) 00:09:49.883 fused_ordering(922) 00:09:49.883 fused_ordering(923) 00:09:49.883 fused_ordering(924) 00:09:49.883 fused_ordering(925) 00:09:49.883 fused_ordering(926) 00:09:49.883 fused_ordering(927) 00:09:49.883 fused_ordering(928) 00:09:49.883 fused_ordering(929) 00:09:49.883 fused_ordering(930) 00:09:49.883 fused_ordering(931) 00:09:49.883 fused_ordering(932) 00:09:49.883 fused_ordering(933) 00:09:49.883 fused_ordering(934) 00:09:49.883 fused_ordering(935) 00:09:49.883 fused_ordering(936) 00:09:49.883 fused_ordering(937) 00:09:49.883 fused_ordering(938) 00:09:49.883 fused_ordering(939) 00:09:49.883 fused_ordering(940) 00:09:49.883 fused_ordering(941) 00:09:49.883 fused_ordering(942) 00:09:49.883 fused_ordering(943) 00:09:49.883 fused_ordering(944) 00:09:49.883 fused_ordering(945) 00:09:49.883 fused_ordering(946) 00:09:49.883 fused_ordering(947) 00:09:49.883 fused_ordering(948) 00:09:49.883 fused_ordering(949) 00:09:49.883 fused_ordering(950) 00:09:49.883 fused_ordering(951) 00:09:49.883 fused_ordering(952) 00:09:49.883 fused_ordering(953) 00:09:49.883 fused_ordering(954) 00:09:49.883 fused_ordering(955) 00:09:49.883 fused_ordering(956) 00:09:49.883 fused_ordering(957) 00:09:49.883 fused_ordering(958) 00:09:49.883 fused_ordering(959) 00:09:49.883 fused_ordering(960) 00:09:49.883 fused_ordering(961) 00:09:49.883 fused_ordering(962) 00:09:49.883 fused_ordering(963) 00:09:49.883 fused_ordering(964) 00:09:49.883 fused_ordering(965) 00:09:49.883 fused_ordering(966) 00:09:49.883 fused_ordering(967) 00:09:49.883 fused_ordering(968) 00:09:49.883 fused_ordering(969) 00:09:49.883 fused_ordering(970) 00:09:49.883 fused_ordering(971) 00:09:49.883 fused_ordering(972) 00:09:49.883 fused_ordering(973) 00:09:49.883 fused_ordering(974) 00:09:49.883 fused_ordering(975) 00:09:49.883 fused_ordering(976) 00:09:49.883 fused_ordering(977) 00:09:49.883 fused_ordering(978) 00:09:49.883 fused_ordering(979) 00:09:49.883 fused_ordering(980) 00:09:49.883 fused_ordering(981) 00:09:49.883 fused_ordering(982) 00:09:49.883 fused_ordering(983) 00:09:49.883 fused_ordering(984) 00:09:49.883 fused_ordering(985) 00:09:49.883 fused_ordering(986) 00:09:49.883 fused_ordering(987) 00:09:49.883 fused_ordering(988) 00:09:49.883 fused_ordering(989) 00:09:49.883 fused_ordering(990) 00:09:49.883 fused_ordering(991) 00:09:49.883 fused_ordering(992) 00:09:49.883 fused_ordering(993) 00:09:49.883 fused_ordering(994) 00:09:49.883 fused_ordering(995) 00:09:49.883 fused_ordering(996) 00:09:49.883 fused_ordering(997) 00:09:49.883 fused_ordering(998) 00:09:49.883 fused_ordering(999) 00:09:49.883 fused_ordering(1000) 00:09:49.883 fused_ordering(1001) 00:09:49.883 fused_ordering(1002) 00:09:49.883 fused_ordering(1003) 00:09:49.883 fused_ordering(1004) 00:09:49.883 fused_ordering(1005) 00:09:49.883 fused_ordering(1006) 00:09:49.883 fused_ordering(1007) 00:09:49.883 fused_ordering(1008) 00:09:49.883 fused_ordering(1009) 00:09:49.883 fused_ordering(1010) 00:09:49.883 fused_ordering(1011) 00:09:49.883 fused_ordering(1012) 00:09:49.883 fused_ordering(1013) 00:09:49.883 fused_ordering(1014) 00:09:49.883 fused_ordering(1015) 00:09:49.883 fused_ordering(1016) 00:09:49.883 fused_ordering(1017) 00:09:49.884 fused_ordering(1018) 00:09:49.884 
fused_ordering(1019) 00:09:49.884 fused_ordering(1020) 00:09:49.884 fused_ordering(1021) 00:09:49.884 fused_ordering(1022) 00:09:49.884 fused_ordering(1023) 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.884 rmmod nvme_tcp 00:09:49.884 rmmod nvme_fabrics 00:09:49.884 rmmod nvme_keyring 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 944727 ']' 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 944727 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 944727 ']' 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 944727 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 944727 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 944727' 00:09:49.884 killing process with pid 944727 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 944727 00:09:49.884 [2024-05-15 03:04:20.870462] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:49.884 03:04:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 944727 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.144 03:04:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.052 03:04:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.052 00:09:52.052 real 0m10.930s 00:09:52.052 user 0m5.565s 00:09:52.052 sys 0m5.645s 00:09:52.052 03:04:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.052 03:04:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:52.052 ************************************ 00:09:52.052 END TEST nvmf_fused_ordering 00:09:52.052 ************************************ 00:09:52.052 03:04:23 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:52.052 03:04:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:52.052 03:04:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.052 03:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.052 ************************************ 00:09:52.052 START TEST nvmf_delete_subsystem 00:09:52.052 ************************************ 00:09:52.052 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:52.311 * Looking for test storage... 00:09:52.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.311 
03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.311 03:04:23
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:52.311 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.312 03:04:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
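Stepping back briefly: the nvmftestfini and killprocess teardown traced at the end of the fused_ordering test reduces to a handful of commands. A hedged sketch, not the verbatim common.sh code (the real script retries the module removal in a for i in {1..20} loop and wraps each step in xtrace helpers; the namespace-deletion line is an assumption about what _remove_spdk_ns does):

    sync
    modprobe -v -r nvme-tcp                 # the rmmod output above shows nvme_tcp,
    modprobe -v -r nvme-fabrics             # nvme_fabrics and nvme_keyring unloading
    if kill -0 944727 2>/dev/null; then     # 944727: the nvmf_tgt pid from that test
        kill 944727                         # killprocess echoes, signals, then waits
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed shape of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                # drop the initiator-side test address
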
00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:57.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:57.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:57.582 03:04:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:57.582 Found net devices under 0000:86:00.0: cvl_0_0 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:57.582 Found net devices under 0000:86:00.1: cvl_0_1 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
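The gather_supported_nvmf_pci_devs walk above amounts to matching the supported NIC PCI IDs against the bus and then reading each matching function's netdev names out of sysfs. A rough illustrative paraphrase (the 0x8086/0x159b ID and the cvl_0_* names come from the Found lines above; the loop shape and variable names are mine, not the common.sh source):

    # Find E810 (ice, 8086:159b) ports and collect their netdev names via sysfs.
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")   # yields cvl_0_0 and cvl_0_1 here
        done
    done
    echo "Found net devices: ${net_devs[*]}"
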
00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.582 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.841 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.841 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.841 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:57.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:09:57.841 00:09:57.842 --- 10.0.0.2 ping statistics --- 00:09:57.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.842 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:09:57.842 00:09:57.842 --- 10.0.0.1 ping statistics --- 00:09:57.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.842 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=948715 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 948715 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 948715 ']' 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:57.842 03:04:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.842 [2024-05-15 03:04:28.878747] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:09:57.842 [2024-05-15 03:04:28.878789] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.842 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.842 [2024-05-15 03:04:28.934508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.101 [2024-05-15 03:04:29.006302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
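nvmf_tcp_init and nvmfappstart, traced above, build the whole TCP test bed out of the two physical ports: the target-side port moves into a private network namespace, the initiator port stays in the root namespace, and the nvmf_tgt application runs inside the namespace. Condensed from the trace (names and addresses as logged; a sketch, not the script itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns to target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction
    # The target then runs inside the namespace; waitforlisten polls its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                                           # logged here as 948715
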
00:09:58.101 [2024-05-15 03:04:29.006342] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.101 [2024-05-15 03:04:29.006350] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.101 [2024-05-15 03:04:29.006356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.101 [2024-05-15 03:04:29.006360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.101 [2024-05-15 03:04:29.006409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.101 [2024-05-15 03:04:29.006412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.667 [2024-05-15 03:04:29.703847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.667 [2024-05-15 03:04:29.719838] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:58.667 [2024-05-15 03:04:29.720036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:58.667 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.668 NULL1 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.668 Delay0 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=948963 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:58.668 03:04:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:58.668 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.668 [2024-05-15 03:04:29.804546] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
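The delete_subsystem test body assembled above is easier to read in one place: it stands up a TCP transport, a subsystem backed by a deliberately slow delay-wrapped null bdev, points spdk_nvme_perf at it with queue depth 128, and then, on the lines that follow, deletes the subsystem while that I/O is still in flight. A condensed sketch of the same RPC sequence (the scripts/rpc.py invocation is an assumption about what the rpc_cmd wrapper expands to):

    # Condensed from the trace above; rpc.py path and wrapper details are assumptions.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # large artificial latency keeps I/O pending
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # 5 s run, qd 128, 70% reads
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # while I/O is outstanding
    # Expected outcome, visible below: queued commands complete with errors and
    # new submissions fail while the initiator qpairs tear down.
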
00:10:00.594 03:04:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.594 03:04:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.594 03:04:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:01.223 [repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' lines elided, interleaved with 'starting I/O failed: -6']
00:10:01.223 [2024-05-15 03:04:32.055824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f506400c600 is same with the state(5) to be set
00:10:01.223 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:01.223 [2024-05-15 03:04:32.056217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13746a0 is same with the state(5) to be set
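The flood of (sct=0, sc=8) completions above is the expected outcome of this test step, not a malfunction: deleting the subsystem while spdk_nvme_perf still has commands queued tears down the TCP qpairs, so in-flight I/O completes with NVMe generic status 0x08 (Command Aborted due to SQ Deletion). A minimal sketch of the step that triggers it, mirroring the rpc_cmd trace above (the direct rpc.py invocation is an assumption; the test itself routes it through the rpc_cmd wrapper):

  # Delete the subsystem out from under an active initiator; in-flight
  # I/O is expected to complete with sct=0/sc=8 rather than hang.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1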
00:10:01.223 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:01.223 [2024-05-15 03:04:32.056435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5064000c00 is same with the state(5) to be set
00:10:01.224 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:01.224 [2024-05-15 03:04:32.056642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f506400bfe0 is same with the state(5) to be set
00:10:02.160 [2024-05-15 03:04:33.026025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374060 is same with the state(5) to be set
00:10:02.160 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:02.160 [2024-05-15 03:04:33.060537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13750c0 is same with the state(5) to be set
00:10:02.160 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:02.160 [2024-05-15 03:04:33.060638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137cc20 is same with the state(5) to be set
00:10:02.160 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:02.160 [2024-05-15 03:04:33.060742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1375f10 is same with the state(5) to be set
00:10:02.160 [repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided]
00:10:02.160 [2024-05-15 03:04:33.061315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f506400c2f0 is same with the state(5) to be set
00:10:02.160 Initializing NVMe Controllers
00:10:02.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:02.160 Controller IO queue size 128, less than required.
00:10:02.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:02.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:02.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:02.160 Initialization complete. Launching workers.
00:10:02.160 ========================================================
00:10:02.160 Latency(us)
00:10:02.160 Device Information : IOPS MiB/s Average min max
00:10:02.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.34 0.08 983198.15 793.59 1046502.48
00:10:02.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.41 0.07 899953.25 621.07 1012885.89
00:10:02.160 ========================================================
00:10:02.160 Total : 307.75 0.15 942784.10 621.07 1046502.48
00:10:02.160
00:10:02.160 [2024-05-15 03:04:33.061727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374060 (9): Bad file descriptor
00:10:02.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:02.160 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.160 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:02.160 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 948963 00:10:02.160 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 948963 00:10:02.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (948963) - No such process 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 948963 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 948963 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 948963 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.419 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.420 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.678 [2024-05-15 03:04:33.583096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=949631 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.678 03:04:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:02.678 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.678 [2024-05-15 03:04:33.645891] subsystem.c:1536:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
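With the subsystem re-created, the namespace and listener added back, and a second spdk_nvme_perf (pid 949631) launched in the background, the script settles into the poll-until-exit loop traced below. A minimal sketch of that pattern, reconstructed from the delete_subsystem.sh line numbers in the traces (the variable bookkeeping is assumed; the perf flags are the ones logged above):

  # Run perf against the target in the background and remember its pid.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  delay=0
  # kill -0 succeeds while the process is still alive; poll every 0.5s
  # and give up after ~10s, matching the '(( delay++ > 20 ))' guard in
  # the trace.
  while kill -0 "$perf_pid"; do
      (( delay++ > 20 )) && exit 1
      sleep 0.5
  done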
00:10:03.243 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.243 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:03.243 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.502 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.502 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:03.502 03:04:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.068 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.069 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:04.069 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.635 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.635 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:04.635 03:04:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.202 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.202 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:05.202 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.460 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.460 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:05.460 03:04:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.718 Initializing NVMe Controllers 00:10:05.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.718 Controller IO queue size 128, less than required. 00:10:05.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:05.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:05.718 Initialization complete. Launching workers. 
00:10:05.718 ========================================================
00:10:05.718 Latency(us)
00:10:05.718 Device Information : IOPS MiB/s Average min max
00:10:05.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003366.48 1000146.07 1041403.78
00:10:05.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005732.27 1000227.83 1042416.76
00:10:05.718 ========================================================
00:10:05.718 Total : 256.00 0.12 1004549.38 1000146.07 1042416.76
00:10:05.718
00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 949631 00:10:05.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (949631) - No such process 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 949631 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.977 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.977 rmmod nvme_tcp 00:10:06.236 rmmod nvme_fabrics 00:10:06.236 rmmod nvme_keyring 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 948715 ']' 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 948715 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 948715 ']' 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 948715 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 948715 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 948715' killing process with pid 948715 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 948715 00:10:06.236 [2024-05-15 03:04:37.225661] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:06.236 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 948715 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.495 03:04:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.399 03:04:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.399 00:10:08.399 real 0m16.283s 00:10:08.399 user 0m30.651s 00:10:08.399 sys 0m5.013s 00:10:08.399 03:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.399 03:04:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.399 ************************************ 00:10:08.399 END TEST nvmf_delete_subsystem 00:10:08.399 ************************************ 00:10:08.399 03:04:39 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:08.399 03:04:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:08.399 03:04:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.399 03:04:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.659 ************************************ 00:10:08.659 START TEST nvmf_ns_masking 00:10:08.659 ************************************ 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:08.659 * Looking for test storage... 
00:10:08.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=[long PATH value elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended, repeatedly, ahead of the stock system PATH] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[long PATH value elided, as above] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[long PATH value elided, as above] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [long PATH value elided, as above] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=8d2c46d8-1ab9-475b-9fec-03a36a25d7ba 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.659 03:04:39
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.659 03:04:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:13.928 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.929 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.929 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.929 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.929 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:13.929 00:10:13.929 --- 10.0.0.2 ping statistics --- 00:10:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.929 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:13.929 00:10:13.929 --- 10.0.0.1 ping statistics --- 00:10:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.929 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.929 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:13.930 03:04:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=953643 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 953643 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 953643 ']' 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:13.930 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:13.930 [2024-05-15 03:04:45.050689] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
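The 'Starting SPDK' banner above continues with its EAL parameters just below; before the target comes up, the two ping checks above confirm the loopback topology that nvmftestinit assembled out of the two E810 ports. Condensed from the ip/iptables traces above into a sketch (interface names cvl_0_0/cvl_0_1 are the ones logged; flushes and error handling are omitted):

  # The target port moves into its own namespace and gets 10.0.0.2; the
  # initiator port stays in the root namespace as 10.0.0.1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT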
00:10:13.930 [2024-05-15 03:04:45.050729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.930 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.188 [2024-05-15 03:04:45.108408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.188 [2024-05-15 03:04:45.181913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.188 [2024-05-15 03:04:45.181950] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.188 [2024-05-15 03:04:45.181957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.188 [2024-05-15 03:04:45.181963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.188 [2024-05-15 03:04:45.181968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.188 [2024-05-15 03:04:45.182008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.188 [2024-05-15 03:04:45.182106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.188 [2024-05-15 03:04:45.182167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.188 [2024-05-15 03:04:45.182169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.755 03:04:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:15.015 [2024-05-15 03:04:46.049018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.015 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:15.015 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:15.015 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:15.274 Malloc1 00:10:15.274 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:15.274 Malloc2 00:10:15.532 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.532 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:15.792 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.792 [2024-05-15 03:04:46.952831] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:15.792 [2024-05-15 03:04:46.953067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.051 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:10:16.051 03:04:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8d2c46d8-1ab9-475b-9fec-03a36a25d7ba -a 10.0.0.2 -s 4420 -i 4 00:10:16.051 03:04:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.051 03:04:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:16.051 03:04:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.051 03:04:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:16.051 03:04:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:17.956 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:17.956 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:17.956 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:18.216 [ 0]:0x1 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ae1707b4d86e463faf73dfb9870eed15 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ae1707b4d86e463faf73dfb9870eed15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:18.216 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:18.475 [ 0]:0x1 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ae1707b4d86e463faf73dfb9870eed15 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ae1707b4d86e463faf73dfb9870eed15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:18.475 [ 1]:0x2 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.475 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.733 03:04:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:18.991 03:04:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:10:18.991 03:04:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8d2c46d8-1ab9-475b-9fec-03a36a25d7ba -a 10.0.0.2 -s 4420 -i 4 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:10:19.250 03:04:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.153 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:21.154 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:21.154 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:21.412 [ 0]:0x2 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.412 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:21.412 [ 0]:0x1 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ae1707b4d86e463faf73dfb9870eed15 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ae1707b4d86e463faf73dfb9870eed15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:21.671 [ 1]:0x2 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.671 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:21.672 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.672 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:21.930 03:04:52 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:21.930 [ 0]:0x2 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:21.930 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:21.931 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:21.931 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:21.931 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:10:21.931 03:04:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.931 03:04:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8d2c46d8-1ab9-475b-9fec-03a36a25d7ba -a 10.0.0.2 -s 4420 -i 4 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:22.189 03:04:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:24.724 [ 0]:0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ae1707b4d86e463faf73dfb9870eed15 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ae1707b4d86e463faf73dfb9870eed15 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:24.724 [ 1]:0x2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:24.724 [ 0]:0x2 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:24.724 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:24.997 03:04:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:24.997 [2024-05-15 03:04:56.054474] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:24.997 
request: 00:10:24.997 { 00:10:24.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.997 "nsid": 2, 00:10:24.997 "host": "nqn.2016-06.io.spdk:host1", 00:10:24.997 "method": "nvmf_ns_remove_host", 00:10:24.997 "req_id": 1 00:10:24.997 } 00:10:24.997 Got JSON-RPC error response 00:10:24.997 response: 00:10:24.997 { 00:10:24.997 "code": -32602, 00:10:24.997 "message": "Invalid parameters" 00:10:24.997 } 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:24.997 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:24.997 [ 0]:0x2 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f1c81ad5bf4441dad74c94a459ad4a9 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f1c81ad5bf4441dad74c94a459ad4a9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.278 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.278 rmmod nvme_tcp 00:10:25.278 rmmod nvme_fabrics 00:10:25.538 rmmod nvme_keyring 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 953643 ']' 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 953643 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 953643 ']' 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 953643 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 953643 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 953643' 00:10:25.538 killing process with pid 953643 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 953643 00:10:25.538 [2024-05-15 03:04:56.523450] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:25.538 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 953643 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
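For anyone reading this log for the masking semantics rather than the pass/fail result: stripped of the xtrace noise, the flow the ns_masking test exercised above reduces to the command sequence below (a condensed sketch; the NQNs, namespace ID, host UUID, and bdev name are copied from this run, and rpc.py abbreviates the full scripts/rpc.py path shown in the trace):

    # Attach a namespace that is hidden from every host by default
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 8d2c46d8-1ab9-475b-9fec-03a36a25d7ba -a 10.0.0.2 -s 4420 -i 4
    # Unmask it for host1, check visibility, then mask it again
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # non-zero NGUID: visible
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros: masked again

The one rejected RPC, logged at 03:04:56 with JSON-RPC error -32602 (Invalid parameters), targeted namespace ID 2, apparently because that namespace was never switched out of auto-visible mode, so per-host add/remove does not apply to it.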
00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.797 03:04:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.700 03:04:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.700 00:10:27.700 real 0m19.268s 00:10:27.700 user 0m50.278s 00:10:27.700 sys 0m5.525s 00:10:27.700 03:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.700 03:04:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:27.700 ************************************ 00:10:27.700 END TEST nvmf_ns_masking 00:10:27.700 ************************************ 00:10:27.959 03:04:58 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:27.959 03:04:58 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:27.959 03:04:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:27.959 03:04:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:27.959 03:04:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.959 ************************************ 00:10:27.959 START TEST nvmf_nvme_cli 00:10:27.959 ************************************ 00:10:27.959 03:04:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:27.959 * Looking for test storage... 
00:10:27.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.959 03:04:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.960 03:04:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:33.231 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:33.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:33.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:33.232 Found net devices under 0000:86:00.0: cvl_0_0 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:33.232 Found net devices under 0000:86:00.1: cvl_0_1 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:33.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:10:33.232 00:10:33.232 --- 10.0.0.2 ping statistics --- 00:10:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.232 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:33.232 00:10:33.232 --- 10.0.0.1 ping statistics --- 00:10:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.232 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=959095 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 959095 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 959095 ']' 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.232 03:05:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.232 [2024-05-15 03:05:03.875648] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:10:33.232 [2024-05-15 03:05:03.875702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.232 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.232 [2024-05-15 03:05:03.932846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.232 [2024-05-15 03:05:04.012838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.232 [2024-05-15 03:05:04.012872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
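As an aside for anyone reproducing this network layout by hand: the nvmf_tcp_init steps traced above split the two ice ports between network namespaces so initiator and target can talk over TCP on one machine. Condensed (every command below appears verbatim in the trace; cvl_0_0/cvl_0_1 are the renamed e810 ports discovered earlier):

    ip netns add cvl_0_0_ns_spdk                    # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target sanity check

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its 10.0.0.2:4420 listener is reachable from the root namespace, presumably over the physical link between the two ports given NET_TYPE=phy.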
00:10:33.232 [2024-05-15 03:05:04.012880] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.232 [2024-05-15 03:05:04.012886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.232 [2024-05-15 03:05:04.012891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.232 [2024-05-15 03:05:04.012936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.233 [2024-05-15 03:05:04.012955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.233 [2024-05-15 03:05:04.012972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.233 [2024-05-15 03:05:04.012974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 [2024-05-15 03:05:04.723281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 Malloc0 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 Malloc1 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.798 [2024-05-15 03:05:04.804818] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:33.798 [2024-05-15 03:05:04.805042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.798 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:33.799 00:10:33.799 Discovery Log Number of Records 2, Generation counter 2 00:10:33.799 =====Discovery Log Entry 0====== 00:10:33.799 trtype: tcp 00:10:33.799 adrfam: ipv4 00:10:33.799 subtype: current discovery subsystem 00:10:33.799 treq: not required 00:10:33.799 portid: 0 00:10:33.799 trsvcid: 4420 00:10:33.799 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:33.799 traddr: 10.0.0.2 00:10:33.799 eflags: explicit discovery connections, duplicate discovery information 00:10:33.799 sectype: none 00:10:33.799 =====Discovery Log Entry 1====== 00:10:33.799 trtype: tcp 00:10:33.799 adrfam: ipv4 00:10:33.799 subtype: nvme subsystem 00:10:33.799 treq: not required 00:10:33.799 portid: 0 00:10:33.799 trsvcid: 4420 00:10:33.799 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:33.799 traddr: 10.0.0.2 00:10:33.799 eflags: none 00:10:33.799 sectype: none 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
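The two discovery-log records above come from a target configured entirely through JSON-RPC. Condensed, the bring-up is (every call below appears verbatim in the trace; rpc_cmd is the test framework's wrapper that forwards to scripts/rpc.py against the target's RPC socket):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512-byte blocks
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Record 0 is the discovery subsystem itself; record 1 is cnode1, whose two Malloc namespaces show up as /dev/nvme0n1 and /dev/nvme0n2 once the nvme connect below completes.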
00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:33.799 03:05:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:35.175 03:05:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:10:37.077 03:05:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:37.077 /dev/nvme0n1 ]] 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:37.077 03:05:08 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.077 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:37.336 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.595 rmmod nvme_tcp 00:10:37.595 rmmod nvme_fabrics 00:10:37.595 rmmod nvme_keyring 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 959095 ']' 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 959095 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 959095 ']' 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 959095 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 959095 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 959095' 00:10:37.595 killing process with pid 959095 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 959095 00:10:37.595 [2024-05-15 03:05:08.718440] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:37.595 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 959095 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.854 03:05:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.392 03:05:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.392 00:10:40.392 real 0m12.117s 00:10:40.392 user 0m20.900s 00:10:40.392 sys 0m4.174s 00:10:40.392 03:05:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.392 03:05:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:40.392 ************************************ 00:10:40.392 END TEST nvmf_nvme_cli 00:10:40.392 ************************************ 00:10:40.392 03:05:11 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:40.392 03:05:11 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:40.392 03:05:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:40.392 03:05:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.392 03:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.392 ************************************ 00:10:40.392 START TEST 
nvmf_vfio_user 00:10:40.392 ************************************ 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:40.392 * Looking for test storage... 00:10:40.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.392 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
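The host identity threaded through every nvme-cli call in these tests is produced while common.sh is sourced: NVME_HOSTNQN is the output of nvme gen-hostnqn and NVME_HOSTID is its UUID suffix. A sketch of deriving such a pair by hand (the suffix-stripping expansion is an illustration, not necessarily the exact expression common.sh uses; the UUID differs on every host):

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # bare UUID, usable as --hostid
    printf '%s\n%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"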
00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=960440 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 960440' 00:10:40.393 Process pid: 960440 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 960440 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 960440 ']' 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.393 03:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:40.393 [2024-05-15 03:05:11.267621] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:10:40.393 [2024-05-15 03:05:11.267663] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.393 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.393 [2024-05-15 03:05:11.322470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.393 [2024-05-15 03:05:11.402450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.393 [2024-05-15 03:05:11.402491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.393 [2024-05-15 03:05:11.402498] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.393 [2024-05-15 03:05:11.402504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.394 [2024-05-15 03:05:11.402509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
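The target whose startup notices follow was launched as nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]': shared-memory id 0, all tracepoint groups enabled, and reactors pinned to cores 0-3, which is why four "Reactor started" notices appear next. A rough stand-alone equivalent, assuming a built SPDK tree at $SPDK (the spdk_get_version polling loop is a simplification; the harness uses its own waitforlisten helper instead):

    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # block until the app answers RPCs on the default /var/tmp/spdk.sock
    until "$SPDK"/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done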
00:10:40.394 [2024-05-15 03:05:11.402549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.394 [2024-05-15 03:05:11.402643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.394 [2024-05-15 03:05:11.402705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.394 [2024-05-15 03:05:11.402707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.962 03:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:40.962 03:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:40.962 03:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:42.338 Malloc1 00:10:42.338 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:42.596 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:42.855 03:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:42.855 [2024-05-15 03:05:14.009868] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:43.113 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:43.113 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:43.113 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:43.113 Malloc2 00:10:43.113 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:43.371 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:43.630 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
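Once the VFIOUSER transport exists, each of the two vfio-user controllers above is provisioned by the same five-step sequence: make the per-device socket directory, create a 64 MiB / 512 B-block malloc bdev, create the subsystem, attach the namespace, and add a VFIOUSER listener on that directory. Condensed into one loop (rpc.py stands for the full scripts/rpc.py path used throughout this log):

    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done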
00:10:43.630 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:43.630 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:43.890 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:43.890 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:43.890 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:43.890 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:43.890 [2024-05-15 03:05:14.818311] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:10:43.890 [2024-05-15 03:05:14.818346] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961004 ] 00:10:43.890 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.890 [2024-05-15 03:05:14.848993] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:43.890 [2024-05-15 03:05:14.852486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:43.890 [2024-05-15 03:05:14.852504] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbf3a48c000 00:10:43.890 [2024-05-15 03:05:14.853487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:43.890 [2024-05-15 03:05:14.854487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:43.890 [2024-05-15 03:05:14.855492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:43.890 [2024-05-15 03:05:14.856500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:43.890 [2024-05-15 03:05:14.857508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:43.890 [2024-05-15 03:05:14.858513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:43.891 [2024-05-15 03:05:14.859520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:43.891 [2024-05-15 03:05:14.860526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:43.891 [2024-05-15 03:05:14.861530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:43.891 [2024-05-15 03:05:14.861542] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbf3a481000 00:10:43.891 [2024-05-15 03:05:14.862484] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:43.891 [2024-05-15 03:05:14.871087] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:43.891 [2024-05-15 03:05:14.871106] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:43.891 [2024-05-15 03:05:14.875615] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:43.891 [2024-05-15 03:05:14.875653] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:43.891 [2024-05-15 03:05:14.875730] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:43.891 [2024-05-15 03:05:14.875744] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:43.891 [2024-05-15 03:05:14.875749] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:43.891 [2024-05-15 03:05:14.876612] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:43.891 [2024-05-15 03:05:14.876620] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:43.891 [2024-05-15 03:05:14.876626] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:43.891 [2024-05-15 03:05:14.877616] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:43.891 [2024-05-15 03:05:14.877623] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:43.891 [2024-05-15 03:05:14.877629] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:43.891 [2024-05-15 03:05:14.878623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:43.891 [2024-05-15 03:05:14.878631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:43.891 [2024-05-15 03:05:14.879626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:43.891 [2024-05-15 03:05:14.879635] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:43.891 [2024-05-15 03:05:14.879639] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:43.891 [2024-05-15 03:05:14.879645] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:43.891 
[2024-05-15 03:05:14.879750] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:43.891 [2024-05-15 03:05:14.879754] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:43.891 [2024-05-15 03:05:14.879758] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:43.891 [2024-05-15 03:05:14.880631] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:43.891 [2024-05-15 03:05:14.881632] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:43.891 [2024-05-15 03:05:14.882637] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:43.891 [2024-05-15 03:05:14.883635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:43.891 [2024-05-15 03:05:14.883696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:43.891 [2024-05-15 03:05:14.884649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:43.891 [2024-05-15 03:05:14.884657] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:43.891 [2024-05-15 03:05:14.884664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884680] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:43.891 [2024-05-15 03:05:14.884687] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884700] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:43.891 [2024-05-15 03:05:14.884705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:43.891 [2024-05-15 03:05:14.884718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:43.891 [2024-05-15 03:05:14.884755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:43.891 [2024-05-15 03:05:14.884763] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:43.891 [2024-05-15 03:05:14.884768] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:43.891 [2024-05-15 03:05:14.884771] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:43.891 [2024-05-15 03:05:14.884775] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:43.891 [2024-05-15 03:05:14.884779] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:43.891 [2024-05-15 03:05:14.884783] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:43.891 [2024-05-15 03:05:14.884787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884795] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:43.891 [2024-05-15 03:05:14.884824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:43.891 [2024-05-15 03:05:14.884833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.891 [2024-05-15 03:05:14.884841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.891 [2024-05-15 03:05:14.884847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.891 [2024-05-15 03:05:14.884854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:43.891 [2024-05-15 03:05:14.884858] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884866] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:43.891 [2024-05-15 03:05:14.884886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:43.891 [2024-05-15 03:05:14.884893] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:43.891 [2024-05-15 03:05:14.884897] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884910] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:43.891 [2024-05-15 
03:05:14.884928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:43.891 [2024-05-15 03:05:14.884968] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:43.891 [2024-05-15 03:05:14.884982] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:43.891 [2024-05-15 03:05:14.884986] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:43.891 [2024-05-15 03:05:14.884991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:43.891 [2024-05-15 03:05:14.885005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:43.891 [2024-05-15 03:05:14.885014] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:43.892 [2024-05-15 03:05:14.885025] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885038] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:43.892 [2024-05-15 03:05:14.885041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:43.892 [2024-05-15 03:05:14.885047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885071] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885084] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:43.892 [2024-05-15 03:05:14.885087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:43.892 [2024-05-15 03:05:14.885093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:43.892 
[2024-05-15 03:05:14.885122] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885129] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885134] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885142] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:43.892 [2024-05-15 03:05:14.885146] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:43.892 [2024-05-15 03:05:14.885151] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:43.892 [2024-05-15 03:05:14.885169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:43.892 [2024-05-15 03:05:14.885251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:43.892 [2024-05-15 03:05:14.885255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:43.892 [2024-05-15 03:05:14.885257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:43.892 [2024-05-15 03:05:14.885263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:43.892 [2024-05-15 03:05:14.885269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:43.892 [2024-05-15 03:05:14.885273] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:43.892 [2024-05-15 03:05:14.885278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885284] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:43.892 [2024-05-15 03:05:14.885288] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:43.892 [2024-05-15 03:05:14.885293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:43.892 [2024-05-15 03:05:14.885307] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:43.892 [2024-05-15 03:05:14.885312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:43.892 [2024-05-15 03:05:14.885319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:43.892 [2024-05-15 03:05:14.885348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:43.892 ===================================================== 00:10:43.892 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:43.892 ===================================================== 00:10:43.892 Controller Capabilities/Features 00:10:43.892 ================================ 00:10:43.892 Vendor ID: 4e58 00:10:43.892 Subsystem Vendor ID: 4e58 00:10:43.892 Serial Number: SPDK1 00:10:43.892 Model Number: SPDK bdev Controller 00:10:43.892 Firmware Version: 24.05 00:10:43.892 Recommended Arb Burst: 6 00:10:43.892 IEEE OUI Identifier: 8d 6b 50 00:10:43.892 Multi-path I/O 00:10:43.892 May have multiple subsystem ports: Yes 00:10:43.892 May have multiple controllers: Yes 00:10:43.892 Associated with SR-IOV VF: No 00:10:43.892 Max Data Transfer Size: 131072 00:10:43.892 Max Number of Namespaces: 32 00:10:43.892 Max Number of I/O Queues: 127 00:10:43.892 NVMe Specification Version (VS): 1.3 00:10:43.892 NVMe Specification Version (Identify): 1.3 00:10:43.892 Maximum Queue Entries: 256 00:10:43.892 Contiguous Queues Required: Yes 00:10:43.892 Arbitration Mechanisms Supported 00:10:43.892 Weighted Round Robin: Not Supported 00:10:43.892 Vendor Specific: Not Supported 00:10:43.892 Reset Timeout: 15000 ms 00:10:43.892 Doorbell Stride: 4 bytes 00:10:43.892 NVM Subsystem Reset: Not Supported 00:10:43.892 Command Sets Supported 00:10:43.892 NVM Command Set: Supported 00:10:43.892 Boot Partition: Not Supported 00:10:43.892 Memory Page Size Minimum: 4096 bytes 00:10:43.892 Memory Page Size Maximum: 4096 bytes 00:10:43.892 Persistent Memory Region: Not Supported 00:10:43.892 Optional Asynchronous 
Events Supported 00:10:43.892 Namespace Attribute Notices: Supported 00:10:43.892 Firmware Activation Notices: Not Supported 00:10:43.892 ANA Change Notices: Not Supported 00:10:43.892 PLE Aggregate Log Change Notices: Not Supported 00:10:43.892 LBA Status Info Alert Notices: Not Supported 00:10:43.892 EGE Aggregate Log Change Notices: Not Supported 00:10:43.892 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.892 Zone Descriptor Change Notices: Not Supported 00:10:43.892 Discovery Log Change Notices: Not Supported 00:10:43.892 Controller Attributes 00:10:43.892 128-bit Host Identifier: Supported 00:10:43.892 Non-Operational Permissive Mode: Not Supported 00:10:43.892 NVM Sets: Not Supported 00:10:43.892 Read Recovery Levels: Not Supported 00:10:43.892 Endurance Groups: Not Supported 00:10:43.892 Predictable Latency Mode: Not Supported 00:10:43.892 Traffic Based Keep ALive: Not Supported 00:10:43.892 Namespace Granularity: Not Supported 00:10:43.892 SQ Associations: Not Supported 00:10:43.892 UUID List: Not Supported 00:10:43.892 Multi-Domain Subsystem: Not Supported 00:10:43.892 Fixed Capacity Management: Not Supported 00:10:43.892 Variable Capacity Management: Not Supported 00:10:43.892 Delete Endurance Group: Not Supported 00:10:43.892 Delete NVM Set: Not Supported 00:10:43.892 Extended LBA Formats Supported: Not Supported 00:10:43.893 Flexible Data Placement Supported: Not Supported 00:10:43.893 00:10:43.893 Controller Memory Buffer Support 00:10:43.893 ================================ 00:10:43.893 Supported: No 00:10:43.893 00:10:43.893 Persistent Memory Region Support 00:10:43.893 ================================ 00:10:43.893 Supported: No 00:10:43.893 00:10:43.893 Admin Command Set Attributes 00:10:43.893 ============================ 00:10:43.893 Security Send/Receive: Not Supported 00:10:43.893 Format NVM: Not Supported 00:10:43.893 Firmware Activate/Download: Not Supported 00:10:43.893 Namespace Management: Not Supported 00:10:43.893 Device Self-Test: Not Supported 00:10:43.893 Directives: Not Supported 00:10:43.893 NVMe-MI: Not Supported 00:10:43.893 Virtualization Management: Not Supported 00:10:43.893 Doorbell Buffer Config: Not Supported 00:10:43.893 Get LBA Status Capability: Not Supported 00:10:43.893 Command & Feature Lockdown Capability: Not Supported 00:10:43.893 Abort Command Limit: 4 00:10:43.893 Async Event Request Limit: 4 00:10:43.893 Number of Firmware Slots: N/A 00:10:43.893 Firmware Slot 1 Read-Only: N/A 00:10:43.893 Firmware Activation Without Reset: N/A 00:10:43.893 Multiple Update Detection Support: N/A 00:10:43.893 Firmware Update Granularity: No Information Provided 00:10:43.893 Per-Namespace SMART Log: No 00:10:43.893 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.893 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:43.893 Command Effects Log Page: Supported 00:10:43.893 Get Log Page Extended Data: Supported 00:10:43.893 Telemetry Log Pages: Not Supported 00:10:43.893 Persistent Event Log Pages: Not Supported 00:10:43.893 Supported Log Pages Log Page: May Support 00:10:43.893 Commands Supported & Effects Log Page: Not Supported 00:10:43.893 Feature Identifiers & Effects Log Page:May Support 00:10:43.893 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.893 Data Area 4 for Telemetry Log: Not Supported 00:10:43.893 Error Log Page Entries Supported: 128 00:10:43.893 Keep Alive: Supported 00:10:43.893 Keep Alive Granularity: 10000 ms 00:10:43.893 00:10:43.893 NVM Command Set Attributes 00:10:43.893 ========================== 
00:10:43.893 Submission Queue Entry Size 00:10:43.893 Max: 64 00:10:43.893 Min: 64 00:10:43.893 Completion Queue Entry Size 00:10:43.893 Max: 16 00:10:43.893 Min: 16 00:10:43.893 Number of Namespaces: 32 00:10:43.893 Compare Command: Supported 00:10:43.893 Write Uncorrectable Command: Not Supported 00:10:43.893 Dataset Management Command: Supported 00:10:43.893 Write Zeroes Command: Supported 00:10:43.893 Set Features Save Field: Not Supported 00:10:43.893 Reservations: Not Supported 00:10:43.893 Timestamp: Not Supported 00:10:43.893 Copy: Supported 00:10:43.893 Volatile Write Cache: Present 00:10:43.893 Atomic Write Unit (Normal): 1 00:10:43.893 Atomic Write Unit (PFail): 1 00:10:43.893 Atomic Compare & Write Unit: 1 00:10:43.893 Fused Compare & Write: Supported 00:10:43.893 Scatter-Gather List 00:10:43.893 SGL Command Set: Supported (Dword aligned) 00:10:43.893 SGL Keyed: Not Supported 00:10:43.893 SGL Bit Bucket Descriptor: Not Supported 00:10:43.893 SGL Metadata Pointer: Not Supported 00:10:43.893 Oversized SGL: Not Supported 00:10:43.893 SGL Metadata Address: Not Supported 00:10:43.893 SGL Offset: Not Supported 00:10:43.893 Transport SGL Data Block: Not Supported 00:10:43.893 Replay Protected Memory Block: Not Supported 00:10:43.893 00:10:43.893 Firmware Slot Information 00:10:43.893 ========================= 00:10:43.893 Active slot: 1 00:10:43.893 Slot 1 Firmware Revision: 24.05 00:10:43.893 00:10:43.893 00:10:43.893 Commands Supported and Effects 00:10:43.893 ============================== 00:10:43.893 Admin Commands 00:10:43.893 -------------- 00:10:43.893 Get Log Page (02h): Supported 00:10:43.893 Identify (06h): Supported 00:10:43.893 Abort (08h): Supported 00:10:43.893 Set Features (09h): Supported 00:10:43.893 Get Features (0Ah): Supported 00:10:43.893 Asynchronous Event Request (0Ch): Supported 00:10:43.893 Keep Alive (18h): Supported 00:10:43.893 I/O Commands 00:10:43.893 ------------ 00:10:43.893 Flush (00h): Supported LBA-Change 00:10:43.893 Write (01h): Supported LBA-Change 00:10:43.893 Read (02h): Supported 00:10:43.893 Compare (05h): Supported 00:10:43.893 Write Zeroes (08h): Supported LBA-Change 00:10:43.893 Dataset Management (09h): Supported LBA-Change 00:10:43.893 Copy (19h): Supported LBA-Change 00:10:43.893 Unknown (79h): Supported LBA-Change 00:10:43.893 Unknown (7Ah): Supported 00:10:43.893 00:10:43.893 Error Log 00:10:43.893 ========= 00:10:43.893 00:10:43.893 Arbitration 00:10:43.893 =========== 00:10:43.893 Arbitration Burst: 1 00:10:43.893 00:10:43.893 Power Management 00:10:43.893 ================ 00:10:43.893 Number of Power States: 1 00:10:43.893 Current Power State: Power State #0 00:10:43.893 Power State #0: 00:10:43.893 Max Power: 0.00 W 00:10:43.893 Non-Operational State: Operational 00:10:43.893 Entry Latency: Not Reported 00:10:43.893 Exit Latency: Not Reported 00:10:43.893 Relative Read Throughput: 0 00:10:43.893 Relative Read Latency: 0 00:10:43.893 Relative Write Throughput: 0 00:10:43.893 Relative Write Latency: 0 00:10:43.893 Idle Power: Not Reported 00:10:43.893 Active Power: Not Reported 00:10:43.893 Non-Operational Permissive Mode: Not Supported 00:10:43.893 00:10:43.893 Health Information 00:10:43.893 ================== 00:10:43.893 Critical Warnings: 00:10:43.893 Available Spare Space: OK 00:10:43.893 Temperature: OK 00:10:43.893 Device Reliability: OK 00:10:43.893 Read Only: No 00:10:43.893 Volatile Memory Backup: OK 00:10:43.893 Current Temperature: 0 Kelvin (-273 Celsius) 
[2024-05-15 03:05:14.885436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:43.893 [2024-05-15 03:05:14.885447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:43.893 [2024-05-15 03:05:14.885474] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:43.893 [2024-05-15 03:05:14.885482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.893 [2024-05-15 03:05:14.885487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.893 [2024-05-15 03:05:14.885493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.893 [2024-05-15 03:05:14.885498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:43.893 [2024-05-15 03:05:14.890472] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:43.893 [2024-05-15 03:05:14.890482] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:43.893 [2024-05-15 03:05:14.890677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:43.893 [2024-05-15 03:05:14.890723] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:43.893 [2024-05-15 03:05:14.890729] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:43.893 [2024-05-15 03:05:14.891685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:43.893 [2024-05-15 03:05:14.891695] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:43.893 [2024-05-15 03:05:14.891742] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:43.893 [2024-05-15 03:05:14.893715] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:10:43.893 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:43.893 Available Spare: 0% 00:10:43.893 Available Spare Threshold: 0% 00:10:43.893 Life Percentage Used: 0% 00:10:43.893 Data Units Read: 0 00:10:43.893 Data Units Written: 0 00:10:43.893 Host Read Commands: 0 00:10:43.893 Host Write Commands: 0 00:10:43.893 Controller Busy Time: 0 minutes 00:10:43.893 Power Cycles: 0 00:10:43.893 Power On Hours: 0 hours 00:10:43.893 Unsafe Shutdowns: 0 00:10:43.893 Unrecoverable Media Errors: 0 00:10:43.894 Lifetime Error Log Entries: 0 00:10:43.894 Warning Temperature Time: 0 minutes 00:10:43.894 Critical Temperature Time: 0 minutes 00:10:43.894 00:10:43.894 Number of Queues 00:10:43.894 ================ 00:10:43.894 Number of I/O Submission Queues: 127 00:10:43.894 Number of I/O Completion Queues: 127 00:10:43.894 00:10:43.894 Active Namespaces 00:10:43.894 ================= 00:10:43.894 Namespace
ID:1 00:10:43.894 Error Recovery Timeout: Unlimited 00:10:43.894 Command Set Identifier: NVM (00h) 00:10:43.894 Deallocate: Supported 00:10:43.894 Deallocated/Unwritten Error: Not Supported 00:10:43.894 Deallocated Read Value: Unknown 00:10:43.894 Deallocate in Write Zeroes: Not Supported 00:10:43.894 Deallocated Guard Field: 0xFFFF 00:10:43.894 Flush: Supported 00:10:43.894 Reservation: Supported 00:10:43.894 Namespace Sharing Capabilities: Multiple Controllers 00:10:43.894 Size (in LBAs): 131072 (0GiB) 00:10:43.894 Capacity (in LBAs): 131072 (0GiB) 00:10:43.894 Utilization (in LBAs): 131072 (0GiB) 00:10:43.894 NGUID: 55BEEB6CB0BC49FFB9FCF26AF107C84D 00:10:43.894 UUID: 55beeb6c-b0bc-49ff-b9fc-f26af107c84d 00:10:43.894 Thin Provisioning: Not Supported 00:10:43.894 Per-NS Atomic Units: Yes 00:10:43.894 Atomic Boundary Size (Normal): 0 00:10:43.894 Atomic Boundary Size (PFail): 0 00:10:43.894 Atomic Boundary Offset: 0 00:10:43.894 Maximum Single Source Range Length: 65535 00:10:43.894 Maximum Copy Length: 65535 00:10:43.894 Maximum Source Range Count: 1 00:10:43.894 NGUID/EUI64 Never Reused: No 00:10:43.894 Namespace Write Protected: No 00:10:43.894 Number of LBA Formats: 1 00:10:43.894 Current LBA Format: LBA Format #00 00:10:43.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.894 00:10:43.894 03:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:43.894 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.153 [2024-05-15 03:05:15.104231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:49.422 Initializing NVMe Controllers 00:10:49.422 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:49.422 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:49.422 Initialization complete. Launching workers. 00:10:49.422 ======================================================== 00:10:49.422 Latency(us) 00:10:49.422 Device Information : IOPS MiB/s Average min max 00:10:49.422 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39915.17 155.92 3209.17 956.83 9586.53 00:10:49.422 ======================================================== 00:10:49.422 Total : 39915.17 155.92 3209.17 956.83 9586.53 00:10:49.422 00:10:49.422 [2024-05-15 03:05:20.125315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:49.422 03:05:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:49.422 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.422 [2024-05-15 03:05:20.340336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:54.688 Initializing NVMe Controllers 00:10:54.688 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:54.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:54.688 Initialization complete. Launching workers. 
00:10:54.688 ======================================================== 00:10:54.688 Latency(us) 00:10:54.688 Device Information : IOPS MiB/s Average min max 00:10:54.688 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.25 62.71 7978.27 6980.53 8039.63 00:10:54.688 ======================================================== 00:10:54.688 Total : 16054.25 62.71 7978.27 6980.53 8039.63 00:10:54.688 00:10:54.688 [2024-05-15 03:05:25.381498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:54.688 03:05:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:54.688 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.688 [2024-05-15 03:05:25.577457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:00.033 [2024-05-15 03:05:30.659830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:00.033 Initializing NVMe Controllers 00:11:00.033 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:00.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:00.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:00.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:00.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:00.033 Initialization complete. Launching workers. 00:11:00.033 Starting thread on core 2 00:11:00.033 Starting thread on core 3 00:11:00.033 Starting thread on core 1 00:11:00.033 03:05:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:00.033 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.033 [2024-05-15 03:05:30.932728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.319 [2024-05-15 03:05:33.988813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.319 Initializing NVMe Controllers 00:11:03.319 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.319 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.319 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:03.319 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:03.319 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:03.319 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:03.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:03.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:03.319 Initialization complete. Launching workers. 
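(The @86 reconnect example above drives mixed I/O from three worker cores against the same vfio-user controller; a minimal sketch of that invocation, flags copied from the log and SPDK_DIR as assumed above. The arbitration results continue below.)

# 50/50 random read/write at QD 32 on cores 1-3 (mask 0xE), 5 s
"$SPDK_DIR/build/examples/reconnect" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE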
00:11:03.319 Starting thread on core 1 with urgent priority queue 00:11:03.319 Starting thread on core 2 with urgent priority queue 00:11:03.319 Starting thread on core 3 with urgent priority queue 00:11:03.319 Starting thread on core 0 with urgent priority queue 00:11:03.319 SPDK bdev Controller (SPDK1 ) core 0: 9593.67 IO/s 10.42 secs/100000 ios 00:11:03.319 SPDK bdev Controller (SPDK1 ) core 1: 9480.33 IO/s 10.55 secs/100000 ios 00:11:03.319 SPDK bdev Controller (SPDK1 ) core 2: 8023.33 IO/s 12.46 secs/100000 ios 00:11:03.319 SPDK bdev Controller (SPDK1 ) core 3: 9365.67 IO/s 10.68 secs/100000 ios 00:11:03.319 ======================================================== 00:11:03.319 00:11:03.319 03:05:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:03.319 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.319 [2024-05-15 03:05:34.257760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.319 Initializing NVMe Controllers 00:11:03.319 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.319 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.319 Namespace ID: 1 size: 0GB 00:11:03.319 Initialization complete. 00:11:03.319 INFO: using host memory buffer for IO 00:11:03.319 Hello world! 00:11:03.319 [2024-05-15 03:05:34.291992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.319 03:05:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:03.319 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.577 [2024-05-15 03:05:34.565877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:04.512 Initializing NVMe Controllers 00:11:04.512 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:04.512 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:04.512 Initialization complete. Launching workers. 
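(The @88 hello_world and @89 overhead steps above reuse the same -r transport ID string; a minimal sketch, flags copied from the log, SPDK_DIR as assumed above. Judging by the output that follows, -H appears to enable the submit/complete latency histograms.)

"$SPDK_DIR/build/examples/hello_world" -d 256 -g \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
"$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 1 -H -g -d 256 \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'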
00:11:04.512 submit (in ns) avg, min, max = 6203.6, 3238.3, 4995178.3 00:11:04.512 complete (in ns) avg, min, max = 21837.1, 1764.3, 3999430.4 00:11:04.512 00:11:04.512 Submit histogram 00:11:04.512 ================ 00:11:04.512 Range in us Cumulative Count 00:11:04.512 3.228 - 3.242: 0.0120% ( 2) 00:11:04.512 3.242 - 3.256: 0.0181% ( 1) 00:11:04.512 3.256 - 3.270: 0.0301% ( 2) 00:11:04.512 3.270 - 3.283: 0.0481% ( 3) 00:11:04.512 3.283 - 3.297: 0.1144% ( 11) 00:11:04.512 3.297 - 3.311: 0.4454% ( 55) 00:11:04.512 3.311 - 3.325: 1.7093% ( 210) 00:11:04.512 3.325 - 3.339: 3.4728% ( 293) 00:11:04.512 3.339 - 3.353: 5.7237% ( 374) 00:11:04.512 3.353 - 3.367: 9.5937% ( 643) 00:11:04.512 3.367 - 3.381: 14.7638% ( 859) 00:11:04.512 3.381 - 3.395: 20.1866% ( 901) 00:11:04.512 3.395 - 3.409: 26.4219% ( 1036) 00:11:04.512 3.409 - 3.423: 32.1276% ( 948) 00:11:04.512 3.423 - 3.437: 37.5925% ( 908) 00:11:04.512 3.437 - 3.450: 42.4315% ( 804) 00:11:04.512 3.450 - 3.464: 48.2877% ( 973) 00:11:04.513 3.464 - 3.478: 53.7045% ( 900) 00:11:04.513 3.478 - 3.492: 57.9657% ( 708) 00:11:04.513 3.492 - 3.506: 62.6121% ( 772) 00:11:04.513 3.506 - 3.520: 68.7752% ( 1024) 00:11:04.513 3.520 - 3.534: 73.3975% ( 768) 00:11:04.513 3.534 - 3.548: 77.0509% ( 607) 00:11:04.513 3.548 - 3.562: 80.8306% ( 628) 00:11:04.513 3.562 - 3.590: 85.3686% ( 754) 00:11:04.513 3.590 - 3.617: 87.3789% ( 334) 00:11:04.513 3.617 - 3.645: 88.5405% ( 193) 00:11:04.513 3.645 - 3.673: 89.8525% ( 218) 00:11:04.513 3.673 - 3.701: 91.3753% ( 253) 00:11:04.513 3.701 - 3.729: 93.2350% ( 309) 00:11:04.513 3.729 - 3.757: 94.8360% ( 266) 00:11:04.513 3.757 - 3.784: 96.3467% ( 251) 00:11:04.513 3.784 - 3.812: 97.5263% ( 196) 00:11:04.513 3.812 - 3.840: 98.3870% ( 143) 00:11:04.513 3.840 - 3.868: 98.9768% ( 98) 00:11:04.513 3.868 - 3.896: 99.2416% ( 44) 00:11:04.513 3.896 - 3.923: 99.4403% ( 33) 00:11:04.513 3.923 - 3.951: 99.5185% ( 13) 00:11:04.513 3.951 - 3.979: 99.5426% ( 4) 00:11:04.513 3.979 - 4.007: 99.5546% ( 2) 00:11:04.513 4.007 - 4.035: 99.5667% ( 2) 00:11:04.513 4.035 - 4.063: 99.5727% ( 1) 00:11:04.513 4.063 - 4.090: 99.5847% ( 2) 00:11:04.513 4.090 - 4.118: 99.5967% ( 2) 00:11:04.513 4.118 - 4.146: 99.6088% ( 2) 00:11:04.513 4.146 - 4.174: 99.6148% ( 1) 00:11:04.513 4.174 - 4.202: 99.6208% ( 1) 00:11:04.513 4.202 - 4.230: 99.6329% ( 2) 00:11:04.513 4.230 - 4.257: 99.6389% ( 1) 00:11:04.513 4.257 - 4.285: 99.6509% ( 2) 00:11:04.513 4.313 - 4.341: 99.6630% ( 2) 00:11:04.513 4.480 - 4.508: 99.6690% ( 1) 00:11:04.513 4.536 - 4.563: 99.6750% ( 1) 00:11:04.513 5.009 - 5.037: 99.6870% ( 2) 00:11:04.513 5.092 - 5.120: 99.6930% ( 1) 00:11:04.513 5.120 - 5.148: 99.6991% ( 1) 00:11:04.513 5.203 - 5.231: 99.7051% ( 1) 00:11:04.513 5.231 - 5.259: 99.7171% ( 2) 00:11:04.513 5.315 - 5.343: 99.7231% ( 1) 00:11:04.513 5.343 - 5.370: 99.7412% ( 3) 00:11:04.513 5.370 - 5.398: 99.7472% ( 1) 00:11:04.513 5.398 - 5.426: 99.7532% ( 1) 00:11:04.513 5.510 - 5.537: 99.7593% ( 1) 00:11:04.513 5.593 - 5.621: 99.7653% ( 1) 00:11:04.513 5.677 - 5.704: 99.7773% ( 2) 00:11:04.513 5.732 - 5.760: 99.7833% ( 1) 00:11:04.513 5.760 - 5.788: 99.7893% ( 1) 00:11:04.513 5.788 - 5.816: 99.7954% ( 1) 00:11:04.513 5.871 - 5.899: 99.8014% ( 1) 00:11:04.513 5.955 - 5.983: 99.8074% ( 1) 00:11:04.513 6.066 - 6.094: 99.8134% ( 1) 00:11:04.513 6.094 - 6.122: 99.8194% ( 1) 00:11:04.513 6.122 - 6.150: 99.8255% ( 1) 00:11:04.513 6.233 - 6.261: 99.8315% ( 1) 00:11:04.513 6.289 - 6.317: 99.8375% ( 1) 00:11:04.513 6.317 - 6.344: 99.8435% ( 1) 00:11:04.513 6.483 - 6.511: 99.8495% ( 1) 
00:11:04.513 7.068 - 7.096: 99.8556% ( 1) 00:11:04.513 7.123 - 7.179: 99.8616% ( 1) 00:11:04.513 [2024-05-15 03:05:35.589653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:04.513 7.179 - 7.235: 99.8676% ( 1) 00:11:04.513 7.235 - 7.290: 99.8736% ( 1) 00:11:04.513 7.402 - 7.457: 99.8796% ( 1) 00:11:04.513 7.457 - 7.513: 99.8856% ( 1) 00:11:04.513 7.680 - 7.736: 99.8977% ( 2) 00:11:04.513 8.125 - 8.181: 99.9037% ( 1) 00:11:04.513 9.238 - 9.294: 99.9097% ( 1) 00:11:04.513 9.517 - 9.572: 99.9157% ( 1) 00:11:04.513 11.687 - 11.743: 99.9218% ( 1) 00:11:04.513 14.470 - 14.581: 99.9278% ( 1) 00:11:04.513 19.478 - 19.590: 99.9338% ( 1) 00:11:04.513 3989.148 - 4017.642: 99.9940% ( 10) 00:11:04.513 4986.435 - 5014.929: 100.0000% ( 1) 00:11:04.513 00:11:04.513 Complete histogram 00:11:04.513 ================== 00:11:04.513 Range in us Cumulative Count 00:11:04.513 1.760 - 1.767: 0.0060% ( 1) 00:11:04.513 1.767 - 1.774: 0.0662% ( 10) 00:11:04.513 1.774 - 1.781: 0.1083% ( 7) 00:11:04.513 1.781 - 1.795: 0.1324% ( 4) 00:11:04.513 1.795 - 1.809: 0.1565% ( 4) 00:11:04.513 1.809 - 1.823: 2.0102% ( 308) 00:11:04.513 1.823 - 1.837: 22.0644% ( 3332) 00:11:04.513 1.837 - 1.850: 31.5618% ( 1578) 00:11:04.513 1.850 - 1.864: 33.3073% ( 290) 00:11:04.513 1.864 - 1.878: 36.6657% ( 558) 00:11:04.513 1.878 - 1.892: 63.7135% ( 4494) 00:11:04.513 1.892 - 1.906: 89.5697% ( 4296) 00:11:04.513 1.906 - 1.920: 93.9392% ( 726) 00:11:04.513 1.920 - 1.934: 96.3888% ( 407) 00:11:04.513 1.934 - 1.948: 97.1712% ( 130) 00:11:04.513 1.948 - 1.962: 97.8092% ( 106) 00:11:04.513 1.962 - 1.976: 98.6277% ( 136) 00:11:04.513 1.976 - 1.990: 99.0250% ( 66) 00:11:04.513 1.990 - 2.003: 99.0852% ( 10) 00:11:04.513 2.003 - 2.017: 99.1092% ( 4) 00:11:04.513 2.017 - 2.031: 99.1273% ( 3) 00:11:04.513 2.031 - 2.045: 99.1333% ( 1) 00:11:04.513 2.045 - 2.059: 99.1514% ( 3) 00:11:04.513 2.059 - 2.073: 99.1634% ( 2) 00:11:04.513 2.087 - 2.101: 99.1694% ( 1) 00:11:04.513 2.115 - 2.129: 99.1754% ( 1) 00:11:04.513 2.129 - 2.143: 99.1815% ( 1) 00:11:04.513 2.143 - 2.157: 99.1875% ( 1) 00:11:04.513 2.157 - 2.170: 99.1935% ( 1) 00:11:04.513 2.170 - 2.184: 99.1995% ( 1) 00:11:04.513 2.184 - 2.198: 99.2055% ( 1) 00:11:04.513 2.198 - 2.212: 99.2176% ( 2) 00:11:04.513 2.240 - 2.254: 99.2236% ( 1) 00:11:04.513 2.254 - 2.268: 99.2416% ( 3) 00:11:04.513 2.282 - 2.296: 99.2537% ( 2) 00:11:04.513 2.310 - 2.323: 99.2597% ( 1) 00:11:04.513 2.323 - 2.337: 99.2717% ( 2) 00:11:04.513 2.351 - 2.365: 99.2778% ( 1) 00:11:04.513 2.379 - 2.393: 99.2838% ( 1) 00:11:04.513 2.435 - 2.449: 99.2898% ( 1) 00:11:04.513 3.534 - 3.548: 99.2958% ( 1) 00:11:04.513 3.548 - 3.562: 99.3018% ( 1) 00:11:04.513 3.701 - 3.729: 99.3079% ( 1) 00:11:04.513 3.757 - 3.784: 99.3139% ( 1) 00:11:04.513 3.784 - 3.812: 99.3199% ( 1) 00:11:04.513 3.979 - 4.007: 99.3259% ( 1) 00:11:04.513 4.035 - 4.063: 99.3440% ( 3) 00:11:04.513 4.090 - 4.118: 99.3500% ( 1) 00:11:04.513 4.118 - 4.146: 99.3560% ( 1) 00:11:04.513 4.174 - 4.202: 99.3620% ( 1) 00:11:04.513 4.202 - 4.230: 99.3680% ( 1) 00:11:04.513 4.230 - 4.257: 99.3741% ( 1) 00:11:04.513 4.257 - 4.285: 99.3861% ( 2) 00:11:04.513 4.313 - 4.341: 99.3921% ( 1) 00:11:04.513 4.369 - 4.397: 99.3981% ( 1) 00:11:04.513 4.619 - 4.647: 99.4042% ( 1) 00:11:04.513 4.647 - 4.675: 99.4102% ( 1) 00:11:04.513 4.897 - 4.925: 99.4162% ( 1) 00:11:04.513 5.064 - 5.092: 99.4222% ( 1) 00:11:04.513 5.231 - 5.259: 99.4342% ( 2) 00:11:04.513 5.565 - 5.593: 99.4403% ( 1) 00:11:04.513 5.704 - 5.732: 99.4463% ( 1) 
00:11:04.513 6.066 - 6.094: 99.4523% ( 1) 00:11:04.513 6.150 - 6.177: 99.4583% ( 1) 00:11:04.513 6.400 - 6.428: 99.4643% ( 1) 00:11:04.513 6.428 - 6.456: 99.4704% ( 1) 00:11:04.513 6.567 - 6.595: 99.4764% ( 1) 00:11:04.513 7.290 - 7.346: 99.4824% ( 1) 00:11:04.513 8.237 - 8.292: 99.4884% ( 1) 00:11:04.513 17.586 - 17.697: 99.4944% ( 1) 00:11:04.513 40.070 - 40.292: 99.5005% ( 1) 00:11:04.513 3989.148 - 4017.642: 100.0000% ( 83) 00:11:04.513 00:11:04.513 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:04.513 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:04.513 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:04.513 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:04.514 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:04.784 [ 00:11:04.784 { 00:11:04.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:04.784 "subtype": "Discovery", 00:11:04.784 "listen_addresses": [], 00:11:04.784 "allow_any_host": true, 00:11:04.784 "hosts": [] 00:11:04.784 }, 00:11:04.784 { 00:11:04.784 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:04.784 "subtype": "NVMe", 00:11:04.784 "listen_addresses": [ 00:11:04.784 { 00:11:04.784 "trtype": "VFIOUSER", 00:11:04.784 "adrfam": "IPv4", 00:11:04.784 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:04.784 "trsvcid": "0" 00:11:04.784 } 00:11:04.784 ], 00:11:04.784 "allow_any_host": true, 00:11:04.784 "hosts": [], 00:11:04.784 "serial_number": "SPDK1", 00:11:04.784 "model_number": "SPDK bdev Controller", 00:11:04.784 "max_namespaces": 32, 00:11:04.784 "min_cntlid": 1, 00:11:04.784 "max_cntlid": 65519, 00:11:04.784 "namespaces": [ 00:11:04.784 { 00:11:04.784 "nsid": 1, 00:11:04.784 "bdev_name": "Malloc1", 00:11:04.784 "name": "Malloc1", 00:11:04.784 "nguid": "55BEEB6CB0BC49FFB9FCF26AF107C84D", 00:11:04.784 "uuid": "55beeb6c-b0bc-49ff-b9fc-f26af107c84d" 00:11:04.784 } 00:11:04.784 ] 00:11:04.784 }, 00:11:04.784 { 00:11:04.784 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:04.784 "subtype": "NVMe", 00:11:04.784 "listen_addresses": [ 00:11:04.784 { 00:11:04.784 "trtype": "VFIOUSER", 00:11:04.784 "adrfam": "IPv4", 00:11:04.784 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:04.784 "trsvcid": "0" 00:11:04.784 } 00:11:04.784 ], 00:11:04.784 "allow_any_host": true, 00:11:04.784 "hosts": [], 00:11:04.784 "serial_number": "SPDK2", 00:11:04.784 "model_number": "SPDK bdev Controller", 00:11:04.784 "max_namespaces": 32, 00:11:04.784 "min_cntlid": 1, 00:11:04.784 "max_cntlid": 65519, 00:11:04.784 "namespaces": [ 00:11:04.784 { 00:11:04.784 "nsid": 1, 00:11:04.784 "bdev_name": "Malloc2", 00:11:04.784 "name": "Malloc2", 00:11:04.784 "nguid": "F52F7BB4074349F099247F1AE574458F", 00:11:04.784 "uuid": "f52f7bb4-0743-49f0-9924-7f1ae574458f" 00:11:04.784 } 00:11:04.784 ] 00:11:04.784 } 00:11:04.784 ] 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=964604 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:04.784 03:05:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:04.784 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.043 [2024-05-15 03:05:35.971005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:05.043 Malloc3 00:11:05.043 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:05.043 [2024-05-15 03:05:36.196777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:05.300 Asynchronous Event Request test 00:11:05.300 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:05.300 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:05.300 Registering asynchronous event callbacks... 00:11:05.300 Starting namespace attribute notice tests for all controllers... 00:11:05.300 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:05.300 aer_cb - Changed Namespace 00:11:05.300 Cleaning up... 
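(The AER test above hot-adds a namespace while the aer tool blocks on a touch file; a minimal sketch of the RPC side, commands copied from the logged @40-@42 steps with RPC as an assumed shorthand. The nvmf_get_subsystems listing that follows shows the new Malloc3 namespace as nsid 2.)

RPC="$SPDK_DIR/scripts/rpc.py"
"$RPC" bdev_malloc_create 64 512 --name Malloc3                        # 64 MiB bdev, 512 B blocks
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # attach as NSID 2; raises the namespace-attribute AEN
"$RPC" nvmf_get_subsystems                                             # verify the new namespace is listed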
00:11:05.300 [ 00:11:05.300 { 00:11:05.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:05.300 "subtype": "Discovery", 00:11:05.300 "listen_addresses": [], 00:11:05.300 "allow_any_host": true, 00:11:05.300 "hosts": [] 00:11:05.300 }, 00:11:05.300 { 00:11:05.300 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:05.300 "subtype": "NVMe", 00:11:05.300 "listen_addresses": [ 00:11:05.300 { 00:11:05.300 "trtype": "VFIOUSER", 00:11:05.300 "adrfam": "IPv4", 00:11:05.300 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:05.300 "trsvcid": "0" 00:11:05.300 } 00:11:05.300 ], 00:11:05.300 "allow_any_host": true, 00:11:05.300 "hosts": [], 00:11:05.300 "serial_number": "SPDK1", 00:11:05.300 "model_number": "SPDK bdev Controller", 00:11:05.300 "max_namespaces": 32, 00:11:05.300 "min_cntlid": 1, 00:11:05.300 "max_cntlid": 65519, 00:11:05.300 "namespaces": [ 00:11:05.300 { 00:11:05.300 "nsid": 1, 00:11:05.300 "bdev_name": "Malloc1", 00:11:05.300 "name": "Malloc1", 00:11:05.300 "nguid": "55BEEB6CB0BC49FFB9FCF26AF107C84D", 00:11:05.300 "uuid": "55beeb6c-b0bc-49ff-b9fc-f26af107c84d" 00:11:05.300 }, 00:11:05.300 { 00:11:05.300 "nsid": 2, 00:11:05.300 "bdev_name": "Malloc3", 00:11:05.300 "name": "Malloc3", 00:11:05.300 "nguid": "7A2DFA2B285649309D993D8E48FE4E9B", 00:11:05.300 "uuid": "7a2dfa2b-2856-4930-9d99-3d8e48fe4e9b" 00:11:05.300 } 00:11:05.300 ] 00:11:05.300 }, 00:11:05.300 { 00:11:05.300 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:05.300 "subtype": "NVMe", 00:11:05.300 "listen_addresses": [ 00:11:05.300 { 00:11:05.300 "trtype": "VFIOUSER", 00:11:05.300 "adrfam": "IPv4", 00:11:05.300 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:05.300 "trsvcid": "0" 00:11:05.300 } 00:11:05.300 ], 00:11:05.300 "allow_any_host": true, 00:11:05.300 "hosts": [], 00:11:05.300 "serial_number": "SPDK2", 00:11:05.300 "model_number": "SPDK bdev Controller", 00:11:05.300 "max_namespaces": 32, 00:11:05.300 "min_cntlid": 1, 00:11:05.300 "max_cntlid": 65519, 00:11:05.300 "namespaces": [ 00:11:05.300 { 00:11:05.300 "nsid": 1, 00:11:05.300 "bdev_name": "Malloc2", 00:11:05.300 "name": "Malloc2", 00:11:05.300 "nguid": "F52F7BB4074349F099247F1AE574458F", 00:11:05.300 "uuid": "f52f7bb4-0743-49f0-9924-7f1ae574458f" 00:11:05.300 } 00:11:05.300 ] 00:11:05.300 } 00:11:05.300 ] 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 964604 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:05.300 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:05.300 [2024-05-15 03:05:36.425672] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:11:05.300 [2024-05-15 03:05:36.425720] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964619 ] 00:11:05.300 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.300 [2024-05-15 03:05:36.453876] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:05.558 [2024-05-15 03:05:36.463720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:05.558 [2024-05-15 03:05:36.463747] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5648108000 00:11:05.558 [2024-05-15 03:05:36.464715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.558 [2024-05-15 03:05:36.465728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.558 [2024-05-15 03:05:36.466728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.558 [2024-05-15 03:05:36.467760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.559 [2024-05-15 03:05:36.468753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.559 [2024-05-15 03:05:36.469756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.559 [2024-05-15 03:05:36.470767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.559 [2024-05-15 03:05:36.471779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.559 [2024-05-15 03:05:36.472792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:05.559 [2024-05-15 03:05:36.472806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f56480fd000 00:11:05.559 [2024-05-15 03:05:36.473749] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:05.559 [2024-05-15 03:05:36.486262] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:05.559 [2024-05-15 03:05:36.486282] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:05.559 [2024-05-15 03:05:36.488336] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:05.559 [2024-05-15 03:05:36.488373] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:05.559 [2024-05-15 03:05:36.488443] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:11:05.559 [2024-05-15 03:05:36.488456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:05.559 [2024-05-15 03:05:36.488461] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:05.559 [2024-05-15 03:05:36.489470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:05.559 [2024-05-15 03:05:36.489480] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:05.559 [2024-05-15 03:05:36.489486] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:05.559 [2024-05-15 03:05:36.490349] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:05.559 [2024-05-15 03:05:36.490357] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:05.559 [2024-05-15 03:05:36.490364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.491354] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:05.559 [2024-05-15 03:05:36.491362] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.493468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:05.559 [2024-05-15 03:05:36.493476] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:05.559 [2024-05-15 03:05:36.493481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.493486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.493591] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:05.559 [2024-05-15 03:05:36.493595] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.493600] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:05.559 [2024-05-15 03:05:36.494377] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:05.559 [2024-05-15 03:05:36.495384] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:05.559 [2024-05-15 03:05:36.496398] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:05.559 [2024-05-15 03:05:36.497396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:05.559 [2024-05-15 03:05:36.497432] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:05.559 [2024-05-15 03:05:36.498406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:05.559 [2024-05-15 03:05:36.498414] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:05.559 [2024-05-15 03:05:36.498418] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.498435] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:05.559 [2024-05-15 03:05:36.498445] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.498456] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:05.559 [2024-05-15 03:05:36.498461] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.559 [2024-05-15 03:05:36.498475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.559 [2024-05-15 03:05:36.504471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:05.559 [2024-05-15 03:05:36.504482] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:05.559 [2024-05-15 03:05:36.504487] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:05.559 [2024-05-15 03:05:36.504492] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:05.559 [2024-05-15 03:05:36.504496] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:05.559 [2024-05-15 03:05:36.504501] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:05.559 [2024-05-15 03:05:36.504504] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:05.559 [2024-05-15 03:05:36.504509] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.504518] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.504528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:05.559 [2024-05-15 03:05:36.512469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:05.559 [2024-05-15 03:05:36.512481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.559 [2024-05-15 03:05:36.512488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.559 [2024-05-15 03:05:36.512495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.559 [2024-05-15 03:05:36.512502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.559 [2024-05-15 03:05:36.512507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.512514] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.512522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:05.559 [2024-05-15 03:05:36.520470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:05.559 [2024-05-15 03:05:36.520477] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:05.559 [2024-05-15 03:05:36.520481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.520487] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.520493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:05.559 [2024-05-15 03:05:36.520502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:05.559 [2024-05-15 03:05:36.528468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:05.559 [2024-05-15 03:05:36.528512] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.528520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.528528] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:05.560 [2024-05-15 03:05:36.528533] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:05.560 [2024-05-15 03:05:36.528539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:05.560 
[2024-05-15 03:05:36.536471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.536484] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:05.560 [2024-05-15 03:05:36.536492] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.536498] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.536504] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:05.560 [2024-05-15 03:05:36.536508] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.560 [2024-05-15 03:05:36.536514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.544470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.544482] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.544489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.544495] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:05.560 [2024-05-15 03:05:36.544499] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.560 [2024-05-15 03:05:36.544504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.552471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.552484] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552497] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552503] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552512] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:05.560 [2024-05-15 03:05:36.552516] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:05.560 [2024-05-15 03:05:36.552520] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:05.560 [2024-05-15 03:05:36.552538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.560469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.560481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.568481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.576470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.576482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.584470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.584482] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:05.560 [2024-05-15 03:05:36.584486] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:05.560 [2024-05-15 03:05:36.584489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:05.560 [2024-05-15 03:05:36.584492] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:05.560 [2024-05-15 03:05:36.584498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:05.560 [2024-05-15 03:05:36.584505] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:05.560 [2024-05-15 03:05:36.584508] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:05.560 [2024-05-15 03:05:36.584514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.584520] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:05.560 [2024-05-15 03:05:36.584523] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.560 [2024-05-15 03:05:36.584528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.584538] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:05.560 [2024-05-15 03:05:36.584541] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:05.560 [2024-05-15 03:05:36.584547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:05.560 [2024-05-15 03:05:36.592469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.592482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.592491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:05.560 [2024-05-15 03:05:36.592499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:05.560 ===================================================== 00:11:05.560 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:05.560 ===================================================== 00:11:05.560 Controller Capabilities/Features 00:11:05.560 ================================ 00:11:05.560 Vendor ID: 4e58 00:11:05.560 Subsystem Vendor ID: 4e58 00:11:05.560 Serial Number: SPDK2 00:11:05.560 Model Number: SPDK bdev Controller 00:11:05.560 Firmware Version: 24.05 00:11:05.560 Recommended Arb Burst: 6 00:11:05.560 IEEE OUI Identifier: 8d 6b 50 00:11:05.560 Multi-path I/O 00:11:05.560 May have multiple subsystem ports: Yes 00:11:05.560 May have multiple controllers: Yes 00:11:05.560 Associated with SR-IOV VF: No 00:11:05.560 Max Data Transfer Size: 131072 00:11:05.560 Max Number of Namespaces: 32 00:11:05.560 Max Number of I/O Queues: 127 00:11:05.560 NVMe Specification Version (VS): 1.3 00:11:05.560 NVMe Specification Version (Identify): 1.3 00:11:05.560 Maximum Queue Entries: 256 00:11:05.560 Contiguous Queues Required: Yes 00:11:05.560 Arbitration Mechanisms Supported 00:11:05.560 Weighted Round Robin: Not Supported 00:11:05.560 Vendor Specific: Not Supported 00:11:05.560 Reset Timeout: 15000 ms 00:11:05.560 Doorbell Stride: 4 bytes 00:11:05.560 NVM Subsystem Reset: Not Supported 00:11:05.560 Command Sets Supported 00:11:05.560 NVM Command Set: Supported 00:11:05.560 Boot Partition: Not Supported 00:11:05.560 Memory Page Size Minimum: 4096 bytes 00:11:05.560 Memory Page Size Maximum: 4096 bytes 00:11:05.560 Persistent Memory Region: Not Supported 00:11:05.560 Optional Asynchronous Events Supported 00:11:05.560 Namespace Attribute Notices: Supported 00:11:05.560 Firmware Activation Notices: Not Supported 00:11:05.560 ANA Change Notices: Not Supported 00:11:05.560 PLE Aggregate Log Change Notices: Not Supported 00:11:05.560 LBA Status Info Alert Notices: Not Supported 00:11:05.560 EGE Aggregate Log Change Notices: Not Supported 00:11:05.560 Normal NVM Subsystem Shutdown event: Not Supported 00:11:05.560 Zone Descriptor Change Notices: Not Supported 00:11:05.560 Discovery Log Change Notices: Not Supported 00:11:05.560 Controller Attributes 00:11:05.560 128-bit Host Identifier: Supported 00:11:05.560 Non-Operational Permissive Mode: Not Supported 00:11:05.560 NVM Sets: Not Supported 00:11:05.560 Read Recovery Levels: Not Supported 00:11:05.560 Endurance Groups: Not Supported 00:11:05.560 Predictable Latency Mode: Not Supported 00:11:05.560 Traffic Based Keep ALive: Not Supported 00:11:05.560 Namespace Granularity: Not Supported 
00:11:05.561 SQ Associations: Not Supported 00:11:05.561 UUID List: Not Supported 00:11:05.561 Multi-Domain Subsystem: Not Supported 00:11:05.561 Fixed Capacity Management: Not Supported 00:11:05.561 Variable Capacity Management: Not Supported 00:11:05.561 Delete Endurance Group: Not Supported 00:11:05.561 Delete NVM Set: Not Supported 00:11:05.561 Extended LBA Formats Supported: Not Supported 00:11:05.561 Flexible Data Placement Supported: Not Supported 00:11:05.561 00:11:05.561 Controller Memory Buffer Support 00:11:05.561 ================================ 00:11:05.561 Supported: No 00:11:05.561 00:11:05.561 Persistent Memory Region Support 00:11:05.561 ================================ 00:11:05.561 Supported: No 00:11:05.561 00:11:05.561 Admin Command Set Attributes 00:11:05.561 ============================ 00:11:05.561 Security Send/Receive: Not Supported 00:11:05.561 Format NVM: Not Supported 00:11:05.561 Firmware Activate/Download: Not Supported 00:11:05.561 Namespace Management: Not Supported 00:11:05.561 Device Self-Test: Not Supported 00:11:05.561 Directives: Not Supported 00:11:05.561 NVMe-MI: Not Supported 00:11:05.561 Virtualization Management: Not Supported 00:11:05.561 Doorbell Buffer Config: Not Supported 00:11:05.561 Get LBA Status Capability: Not Supported 00:11:05.561 Command & Feature Lockdown Capability: Not Supported 00:11:05.561 Abort Command Limit: 4 00:11:05.561 Async Event Request Limit: 4 00:11:05.561 Number of Firmware Slots: N/A 00:11:05.561 Firmware Slot 1 Read-Only: N/A 00:11:05.561 Firmware Activation Without Reset: N/A 00:11:05.561 Multiple Update Detection Support: N/A 00:11:05.561 Firmware Update Granularity: No Information Provided 00:11:05.561 Per-Namespace SMART Log: No 00:11:05.561 Asymmetric Namespace Access Log Page: Not Supported 00:11:05.561 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:05.561 Command Effects Log Page: Supported 00:11:05.561 Get Log Page Extended Data: Supported 00:11:05.561 Telemetry Log Pages: Not Supported 00:11:05.561 Persistent Event Log Pages: Not Supported 00:11:05.561 Supported Log Pages Log Page: May Support 00:11:05.561 Commands Supported & Effects Log Page: Not Supported 00:11:05.561 Feature Identifiers & Effects Log Page:May Support 00:11:05.561 NVMe-MI Commands & Effects Log Page: May Support 00:11:05.561 Data Area 4 for Telemetry Log: Not Supported 00:11:05.561 Error Log Page Entries Supported: 128 00:11:05.561 Keep Alive: Supported 00:11:05.561 Keep Alive Granularity: 10000 ms 00:11:05.561 00:11:05.561 NVM Command Set Attributes 00:11:05.561 ========================== 00:11:05.561 Submission Queue Entry Size 00:11:05.561 Max: 64 00:11:05.561 Min: 64 00:11:05.561 Completion Queue Entry Size 00:11:05.561 Max: 16 00:11:05.561 Min: 16 00:11:05.561 Number of Namespaces: 32 00:11:05.561 Compare Command: Supported 00:11:05.561 Write Uncorrectable Command: Not Supported 00:11:05.561 Dataset Management Command: Supported 00:11:05.561 Write Zeroes Command: Supported 00:11:05.561 Set Features Save Field: Not Supported 00:11:05.561 Reservations: Not Supported 00:11:05.561 Timestamp: Not Supported 00:11:05.561 Copy: Supported 00:11:05.561 Volatile Write Cache: Present 00:11:05.561 Atomic Write Unit (Normal): 1 00:11:05.561 Atomic Write Unit (PFail): 1 00:11:05.561 Atomic Compare & Write Unit: 1 00:11:05.561 Fused Compare & Write: Supported 00:11:05.561 Scatter-Gather List 00:11:05.561 SGL Command Set: Supported (Dword aligned) 00:11:05.561 SGL Keyed: Not Supported 00:11:05.561 SGL Bit Bucket Descriptor: Not Supported 00:11:05.561 
SGL Metadata Pointer: Not Supported 00:11:05.561 Oversized SGL: Not Supported 00:11:05.561 SGL Metadata Address: Not Supported 00:11:05.561 SGL Offset: Not Supported 00:11:05.561 Transport SGL Data Block: Not Supported 00:11:05.561 Replay Protected Memory Block: Not Supported 00:11:05.561 00:11:05.561 Firmware Slot Information 00:11:05.561 ========================= 00:11:05.561 Active slot: 1 00:11:05.561 Slot 1 Firmware Revision: 24.05 00:11:05.561 00:11:05.561 00:11:05.561 Commands Supported and Effects 00:11:05.561 ============================== 00:11:05.561 Admin Commands 00:11:05.561 -------------- 00:11:05.561 Get Log Page (02h): Supported 00:11:05.561 Identify (06h): Supported 00:11:05.561 Abort (08h): Supported 00:11:05.561 Set Features (09h): Supported 00:11:05.561 Get Features (0Ah): Supported 00:11:05.561 Asynchronous Event Request (0Ch): Supported 00:11:05.561 Keep Alive (18h): Supported 00:11:05.561 I/O Commands 00:11:05.561 ------------ 00:11:05.561 Flush (00h): Supported LBA-Change 00:11:05.561 Write (01h): Supported LBA-Change 00:11:05.561 Read (02h): Supported 00:11:05.561 Compare (05h): Supported 00:11:05.561 Write Zeroes (08h): Supported LBA-Change 00:11:05.561 Dataset Management (09h): Supported LBA-Change 00:11:05.561 Copy (19h): Supported LBA-Change 00:11:05.561 Unknown (79h): Supported LBA-Change 00:11:05.561 Unknown (7Ah): Supported 00:11:05.561 00:11:05.561 Error Log 00:11:05.561 ========= 00:11:05.561 00:11:05.561 Arbitration 00:11:05.561 =========== 00:11:05.561 Arbitration Burst: 1 00:11:05.561 00:11:05.561 Power Management 00:11:05.561 ================ 00:11:05.561 Number of Power States: 1 00:11:05.561 Current Power State: Power State #0 00:11:05.561 Power State #0: 00:11:05.561 Max Power: 0.00 W 00:11:05.561 Non-Operational State: Operational 00:11:05.561 Entry Latency: Not Reported 00:11:05.561 Exit Latency: Not Reported 00:11:05.561 Relative Read Throughput: 0 00:11:05.561 Relative Read Latency: 0 00:11:05.561 Relative Write Throughput: 0 00:11:05.561 Relative Write Latency: 0 00:11:05.561 Idle Power: Not Reported 00:11:05.561 Active Power: Not Reported 00:11:05.561 Non-Operational Permissive Mode: Not Supported 00:11:05.561 00:11:05.561 Health Information 00:11:05.561 ================== 00:11:05.561 Critical Warnings: 00:11:05.561 Available Spare Space: OK 00:11:05.561 Temperature: OK 00:11:05.561 Device Reliability: OK 00:11:05.561 Read Only: No 00:11:05.561 Volatile Memory Backup: OK 00:11:05.561 Current Temperature: 0 Kelvin (-2[2024-05-15 03:05:36.592588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:05.561 [2024-05-15 03:05:36.600469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:05.561 [2024-05-15 03:05:36.600493] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:05.561 [2024-05-15 03:05:36.600504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.561 [2024-05-15 03:05:36.600509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.561 [2024-05-15 03:05:36.600514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.561 [2024-05-15 03:05:36.600520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.561 [2024-05-15 03:05:36.600573] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:05.561 [2024-05-15 03:05:36.600583] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:05.561 [2024-05-15 03:05:36.601573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:05.561 [2024-05-15 03:05:36.601616] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:05.561 [2024-05-15 03:05:36.601622] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:05.561 [2024-05-15 03:05:36.602575] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:05.561 [2024-05-15 03:05:36.602586] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:05.561 [2024-05-15 03:05:36.602633] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:05.562 [2024-05-15 03:05:36.605471] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:05.562 73 Celsius) 00:11:05.562 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:05.562 Available Spare: 0% 00:11:05.562 Available Spare Threshold: 0% 00:11:05.562 Life Percentage Used: 0% 00:11:05.562 Data Units Read: 0 00:11:05.562 Data Units Written: 0 00:11:05.562 Host Read Commands: 0 00:11:05.562 Host Write Commands: 0 00:11:05.562 Controller Busy Time: 0 minutes 00:11:05.562 Power Cycles: 0 00:11:05.562 Power On Hours: 0 hours 00:11:05.562 Unsafe Shutdowns: 0 00:11:05.562 Unrecoverable Media Errors: 0 00:11:05.562 Lifetime Error Log Entries: 0 00:11:05.562 Warning Temperature Time: 0 minutes 00:11:05.562 Critical Temperature Time: 0 minutes 00:11:05.562 00:11:05.562 Number of Queues 00:11:05.562 ================ 00:11:05.562 Number of I/O Submission Queues: 127 00:11:05.562 Number of I/O Completion Queues: 127 00:11:05.562 00:11:05.562 Active Namespaces 00:11:05.562 ================= 00:11:05.562 Namespace ID:1 00:11:05.562 Error Recovery Timeout: Unlimited 00:11:05.562 Command Set Identifier: NVM (00h) 00:11:05.562 Deallocate: Supported 00:11:05.562 Deallocated/Unwritten Error: Not Supported 00:11:05.562 Deallocated Read Value: Unknown 00:11:05.562 Deallocate in Write Zeroes: Not Supported 00:11:05.562 Deallocated Guard Field: 0xFFFF 00:11:05.562 Flush: Supported 00:11:05.562 Reservation: Supported 00:11:05.562 Namespace Sharing Capabilities: Multiple Controllers 00:11:05.562 Size (in LBAs): 131072 (0GiB) 00:11:05.562 Capacity (in LBAs): 131072 (0GiB) 00:11:05.562 Utilization (in LBAs): 131072 (0GiB) 00:11:05.562 NGUID: F52F7BB4074349F099247F1AE574458F 00:11:05.562 UUID: f52f7bb4-0743-49f0-9924-7f1ae574458f 00:11:05.562 Thin Provisioning: Not Supported 00:11:05.562 Per-NS Atomic Units: Yes 00:11:05.562 Atomic Boundary Size (Normal): 0 00:11:05.562 Atomic Boundary Size (PFail): 0 00:11:05.562 Atomic Boundary Offset: 0 00:11:05.562 Maximum Single Source Range Length: 65535 
00:11:05.562 Maximum Copy Length: 65535 00:11:05.562 Maximum Source Range Count: 1 00:11:05.562 NGUID/EUI64 Never Reused: No 00:11:05.562 Namespace Write Protected: No 00:11:05.562 Number of LBA Formats: 1 00:11:05.562 Current LBA Format: LBA Format #00 00:11:05.562 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:05.562 00:11:05.562 03:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:05.562 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.819 [2024-05-15 03:05:36.817835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:11.086 Initializing NVMe Controllers 00:11:11.086 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:11.086 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:11.086 Initialization complete. Launching workers. 00:11:11.086 ======================================================== 00:11:11.086 Latency(us) 00:11:11.086 Device Information : IOPS MiB/s Average min max 00:11:11.086 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.49 156.01 3204.60 953.79 6670.81 00:11:11.086 ======================================================== 00:11:11.086 Total : 39937.49 156.01 3204.60 953.79 6670.81 00:11:11.086 00:11:11.086 [2024-05-15 03:05:41.926729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:11.086 03:05:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:11.086 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.086 [2024-05-15 03:05:42.150503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:16.367 Initializing NVMe Controllers 00:11:16.367 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:16.367 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:16.367 Initialization complete. Launching workers. 
00:11:16.367 ======================================================== 00:11:16.367 Latency(us) 00:11:16.367 Device Information : IOPS MiB/s Average min max 00:11:16.367 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.52 155.99 3204.84 984.07 7574.14 00:11:16.367 ======================================================== 00:11:16.367 Total : 39934.52 155.99 3204.84 984.07 7574.14 00:11:16.367 00:11:16.367 [2024-05-15 03:05:47.167651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:16.367 03:05:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:16.367 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.367 [2024-05-15 03:05:47.356024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:21.635 [2024-05-15 03:05:52.496558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:21.635 Initializing NVMe Controllers 00:11:21.635 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:21.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:21.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:21.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:21.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:21.635 Initialization complete. Launching workers. 00:11:21.635 Starting thread on core 2 00:11:21.635 Starting thread on core 3 00:11:21.635 Starting thread on core 1 00:11:21.635 03:05:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:21.635 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.635 [2024-05-15 03:05:52.774885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:24.922 [2024-05-15 03:05:55.841884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:24.922 Initializing NVMe Controllers 00:11:24.922 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:24.922 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:24.922 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:24.922 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:24.922 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:24.922 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:24.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:24.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:24.922 Initialization complete. Launching workers. 
00:11:24.922 Starting thread on core 1 with urgent priority queue 00:11:24.922 Starting thread on core 2 with urgent priority queue 00:11:24.922 Starting thread on core 3 with urgent priority queue 00:11:24.922 Starting thread on core 0 with urgent priority queue 00:11:24.922 SPDK bdev Controller (SPDK2 ) core 0: 9496.00 IO/s 10.53 secs/100000 ios 00:11:24.922 SPDK bdev Controller (SPDK2 ) core 1: 8053.00 IO/s 12.42 secs/100000 ios 00:11:24.922 SPDK bdev Controller (SPDK2 ) core 2: 9295.67 IO/s 10.76 secs/100000 ios 00:11:24.922 SPDK bdev Controller (SPDK2 ) core 3: 9430.33 IO/s 10.60 secs/100000 ios 00:11:24.922 ======================================================== 00:11:24.922 00:11:24.922 03:05:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:24.922 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.181 [2024-05-15 03:05:56.103873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.181 Initializing NVMe Controllers 00:11:25.181 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.181 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.181 Namespace ID: 1 size: 0GB 00:11:25.181 Initialization complete. 00:11:25.181 INFO: using host memory buffer for IO 00:11:25.181 Hello world! 00:11:25.181 [2024-05-15 03:05:56.113935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.181 03:05:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:25.181 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.439 [2024-05-15 03:05:56.389452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:26.374 Initializing NVMe Controllers 00:11:26.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:26.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:26.374 Initialization complete. Launching workers. 
00:11:26.374 submit (in ns) avg, min, max = 7696.7, 3254.8, 3998282.6 00:11:26.374 complete (in ns) avg, min, max = 19635.1, 1769.6, 3999381.7 00:11:26.374 00:11:26.374 Submit histogram 00:11:26.374 ================ 00:11:26.374 Range in us Cumulative Count 00:11:26.374 3.242 - 3.256: 0.0062% ( 1) 00:11:26.374 3.270 - 3.283: 0.0370% ( 5) 00:11:26.374 3.283 - 3.297: 0.1049% ( 11) 00:11:26.374 3.297 - 3.311: 0.3333% ( 37) 00:11:26.374 3.311 - 3.325: 0.6173% ( 46) 00:11:26.374 3.325 - 3.339: 1.1667% ( 89) 00:11:26.374 3.339 - 3.353: 2.2963% ( 183) 00:11:26.374 3.353 - 3.367: 6.0000% ( 600) 00:11:26.374 3.367 - 3.381: 11.1543% ( 835) 00:11:26.374 3.381 - 3.395: 17.5123% ( 1030) 00:11:26.374 3.395 - 3.409: 24.0617% ( 1061) 00:11:26.374 3.409 - 3.423: 30.1296% ( 983) 00:11:26.374 3.423 - 3.437: 35.2778% ( 834) 00:11:26.374 3.437 - 3.450: 40.2840% ( 811) 00:11:26.374 3.450 - 3.464: 45.0000% ( 764) 00:11:26.374 3.464 - 3.478: 49.1358% ( 670) 00:11:26.374 3.478 - 3.492: 53.1728% ( 654) 00:11:26.374 3.492 - 3.506: 57.9753% ( 778) 00:11:26.374 3.506 - 3.520: 65.2160% ( 1173) 00:11:26.374 3.520 - 3.534: 69.9074% ( 760) 00:11:26.374 3.534 - 3.548: 74.5000% ( 744) 00:11:26.374 3.548 - 3.562: 79.6296% ( 831) 00:11:26.374 3.562 - 3.590: 85.7654% ( 994) 00:11:26.374 3.590 - 3.617: 87.5062% ( 282) 00:11:26.374 3.617 - 3.645: 88.3951% ( 144) 00:11:26.374 3.645 - 3.673: 89.5802% ( 192) 00:11:26.374 3.673 - 3.701: 91.2593% ( 272) 00:11:26.374 3.701 - 3.729: 92.8333% ( 255) 00:11:26.374 3.729 - 3.757: 94.3580% ( 247) 00:11:26.374 3.757 - 3.784: 96.0000% ( 266) 00:11:26.374 3.784 - 3.812: 97.4815% ( 240) 00:11:26.374 3.812 - 3.840: 98.3333% ( 138) 00:11:26.374 3.840 - 3.868: 98.8827% ( 89) 00:11:26.374 3.868 - 3.896: 99.2778% ( 64) 00:11:26.374 3.896 - 3.923: 99.5185% ( 39) 00:11:26.374 3.923 - 3.951: 99.5926% ( 12) 00:11:26.374 3.951 - 3.979: 99.6296% ( 6) 00:11:26.374 3.979 - 4.007: 99.6420% ( 2) 00:11:26.374 4.035 - 4.063: 99.6481% ( 1) 00:11:26.374 4.063 - 4.090: 99.6543% ( 1) 00:11:26.375 4.090 - 4.118: 99.6605% ( 1) 00:11:26.375 4.146 - 4.174: 99.6667% ( 1) 00:11:26.375 4.202 - 4.230: 99.6728% ( 1) 00:11:26.375 5.148 - 5.176: 99.6790% ( 1) 00:11:26.375 5.203 - 5.231: 99.6852% ( 1) 00:11:26.375 5.398 - 5.426: 99.6914% ( 1) 00:11:26.375 5.426 - 5.454: 99.6975% ( 1) 00:11:26.375 5.537 - 5.565: 99.7037% ( 1) 00:11:26.375 5.621 - 5.649: 99.7099% ( 1) 00:11:26.375 5.649 - 5.677: 99.7160% ( 1) 00:11:26.375 5.732 - 5.760: 99.7284% ( 2) 00:11:26.375 5.816 - 5.843: 99.7346% ( 1) 00:11:26.375 5.871 - 5.899: 99.7407% ( 1) 00:11:26.375 5.899 - 5.927: 99.7469% ( 1) 00:11:26.375 5.983 - 6.010: 99.7531% ( 1) 00:11:26.375 6.038 - 6.066: 99.7593% ( 1) 00:11:26.375 6.066 - 6.094: 99.7654% ( 1) 00:11:26.375 6.122 - 6.150: 99.7716% ( 1) 00:11:26.375 6.177 - 6.205: 99.7778% ( 1) 00:11:26.375 6.261 - 6.289: 99.7901% ( 2) 00:11:26.375 6.400 - 6.428: 99.7963% ( 1) 00:11:26.375 6.428 - 6.456: 99.8025% ( 1) 00:11:26.375 6.456 - 6.483: 99.8086% ( 1) 00:11:26.375 6.623 - 6.650: 99.8148% ( 1) 00:11:26.375 6.678 - 6.706: 99.8210% ( 1) 00:11:26.375 6.734 - 6.762: 99.8272% ( 1) 00:11:26.375 6.817 - 6.845: 99.8333% ( 1) 00:11:26.375 6.873 - 6.901: 99.8395% ( 1) 00:11:26.375 6.901 - 6.929: 99.8457% ( 1) 00:11:26.375 7.012 - 7.040: 99.8519% ( 1) 00:11:26.375 7.068 - 7.096: 99.8580% ( 1) 00:11:26.375 7.179 - 7.235: 99.8642% ( 1) 00:11:26.375 7.235 - 7.290: 99.8704% ( 1) 00:11:26.375 7.402 - 7.457: 99.8765% ( 1) 00:11:26.375 11.019 - 11.075: 99.8827% ( 1) 00:11:26.375 11.576 - 11.631: 99.8889% ( 1) 00:11:26.375 13.468 - 13.523: 99.8951% ( 1) 
00:11:26.375 3989.148 - 4017.642: 100.0000% ( 17) 00:11:26.375 00:11:26.375 Complete histogram 00:11:26.375 ================== 00:11:26.375 [2024-05-15 03:05:57.482486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:26.375 Range in us Cumulative Count 00:11:26.375 1.767 - 1.774: 0.0062% ( 1) 00:11:26.375 1.774 - 1.781: 0.0494% ( 7) 00:11:26.375 1.781 - 1.795: 0.1790% ( 21) 00:11:26.375 1.795 - 1.809: 0.2284% ( 8) 00:11:26.375 1.809 - 1.823: 0.2716% ( 7) 00:11:26.375 1.823 - 1.837: 7.2778% ( 1135) 00:11:26.375 1.837 - 1.850: 37.2716% ( 4859) 00:11:26.375 1.850 - 1.864: 45.2037% ( 1285) 00:11:26.375 1.864 - 1.878: 47.3272% ( 344) 00:11:26.375 1.878 - 1.892: 52.0926% ( 772) 00:11:26.375 1.892 - 1.906: 81.0926% ( 4698) 00:11:26.375 1.906 - 1.920: 94.3642% ( 2150) 00:11:26.375 1.920 - 1.934: 96.5679% ( 357) 00:11:26.375 1.934 - 1.948: 97.5802% ( 164) 00:11:26.375 1.948 - 1.962: 97.9568% ( 61) 00:11:26.375 1.962 - 1.976: 98.3765% ( 68) 00:11:26.375 1.976 - 1.990: 98.8704% ( 80) 00:11:26.375 1.990 - 2.003: 99.0864% ( 35) 00:11:26.375 2.003 - 2.017: 99.1481% ( 10) 00:11:26.375 2.017 - 2.031: 99.2037% ( 9) 00:11:26.375 2.031 - 2.045: 99.2160% ( 2) 00:11:26.375 2.045 - 2.059: 99.2222% ( 1) 00:11:26.375 2.059 - 2.073: 99.2531% ( 5) 00:11:26.375 2.073 - 2.087: 99.2840% ( 5) 00:11:26.375 2.087 - 2.101: 99.3086% ( 4) 00:11:26.375 2.101 - 2.115: 99.3210% ( 2) 00:11:26.375 2.157 - 2.170: 99.3272% ( 1) 00:11:26.375 2.170 - 2.184: 99.3333% ( 1) 00:11:26.375 2.212 - 2.226: 99.3457% ( 2) 00:11:26.375 2.226 - 2.240: 99.3519% ( 1) 00:11:26.375 2.254 - 2.268: 99.3580% ( 1) 00:11:26.375 2.282 - 2.296: 99.3642% ( 1) 00:11:26.375 2.296 - 2.310: 99.3765% ( 2) 00:11:26.375 2.310 - 2.323: 99.3889% ( 2) 00:11:26.375 2.379 - 2.393: 99.3951% ( 1) 00:11:26.375 2.407 - 2.421: 99.4012% ( 1) 00:11:26.375 2.922 - 2.936: 99.4074% ( 1) 00:11:26.375 3.951 - 3.979: 99.4321% ( 4) 00:11:26.375 4.007 - 4.035: 99.4383% ( 1) 00:11:26.375 4.035 - 4.063: 99.4444% ( 1) 00:11:26.375 4.090 - 4.118: 99.4506% ( 1) 00:11:26.375 4.202 - 4.230: 99.4630% ( 2) 00:11:26.375 4.313 - 4.341: 99.4691% ( 1) 00:11:26.375 4.563 - 4.591: 99.4753% ( 1) 00:11:26.375 4.591 - 4.619: 99.4877% ( 2) 00:11:26.375 4.730 - 4.758: 99.4938% ( 1) 00:11:26.375 4.758 - 4.786: 99.5000% ( 1) 00:11:26.375 4.786 - 4.814: 99.5123% ( 2) 00:11:26.375 5.037 - 5.064: 99.5185% ( 1) 00:11:26.375 5.259 - 5.287: 99.5247% ( 1) 00:11:26.375 5.732 - 5.760: 99.5309% ( 1) 00:11:26.375 6.066 - 6.094: 99.5370% ( 1) 00:11:26.375 11.631 - 11.687: 99.5432% ( 1) 00:11:26.375 13.969 - 14.024: 99.5494% ( 1) 00:11:26.375 39.847 - 40.070: 99.5556% ( 1) 00:11:26.375 3989.148 - 4017.642: 100.0000% ( 72) 00:11:26.375 00:11:26.375 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:26.375 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:26.375 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:26.375 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:26.375 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:26.633 [ 00:11:26.633 { 00:11:26.633 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.633 "subtype": "Discovery", 00:11:26.633 
"listen_addresses": [], 00:11:26.633 "allow_any_host": true, 00:11:26.633 "hosts": [] 00:11:26.633 }, 00:11:26.633 { 00:11:26.633 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:26.633 "subtype": "NVMe", 00:11:26.633 "listen_addresses": [ 00:11:26.633 { 00:11:26.633 "trtype": "VFIOUSER", 00:11:26.633 "adrfam": "IPv4", 00:11:26.633 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:26.633 "trsvcid": "0" 00:11:26.633 } 00:11:26.633 ], 00:11:26.633 "allow_any_host": true, 00:11:26.633 "hosts": [], 00:11:26.633 "serial_number": "SPDK1", 00:11:26.633 "model_number": "SPDK bdev Controller", 00:11:26.633 "max_namespaces": 32, 00:11:26.633 "min_cntlid": 1, 00:11:26.633 "max_cntlid": 65519, 00:11:26.633 "namespaces": [ 00:11:26.633 { 00:11:26.633 "nsid": 1, 00:11:26.633 "bdev_name": "Malloc1", 00:11:26.633 "name": "Malloc1", 00:11:26.633 "nguid": "55BEEB6CB0BC49FFB9FCF26AF107C84D", 00:11:26.634 "uuid": "55beeb6c-b0bc-49ff-b9fc-f26af107c84d" 00:11:26.634 }, 00:11:26.634 { 00:11:26.634 "nsid": 2, 00:11:26.634 "bdev_name": "Malloc3", 00:11:26.634 "name": "Malloc3", 00:11:26.634 "nguid": "7A2DFA2B285649309D993D8E48FE4E9B", 00:11:26.634 "uuid": "7a2dfa2b-2856-4930-9d99-3d8e48fe4e9b" 00:11:26.634 } 00:11:26.634 ] 00:11:26.634 }, 00:11:26.634 { 00:11:26.634 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:26.634 "subtype": "NVMe", 00:11:26.634 "listen_addresses": [ 00:11:26.634 { 00:11:26.634 "trtype": "VFIOUSER", 00:11:26.634 "adrfam": "IPv4", 00:11:26.634 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:26.634 "trsvcid": "0" 00:11:26.634 } 00:11:26.634 ], 00:11:26.634 "allow_any_host": true, 00:11:26.634 "hosts": [], 00:11:26.634 "serial_number": "SPDK2", 00:11:26.634 "model_number": "SPDK bdev Controller", 00:11:26.634 "max_namespaces": 32, 00:11:26.634 "min_cntlid": 1, 00:11:26.634 "max_cntlid": 65519, 00:11:26.634 "namespaces": [ 00:11:26.634 { 00:11:26.634 "nsid": 1, 00:11:26.634 "bdev_name": "Malloc2", 00:11:26.634 "name": "Malloc2", 00:11:26.634 "nguid": "F52F7BB4074349F099247F1AE574458F", 00:11:26.634 "uuid": "f52f7bb4-0743-49f0-9924-7f1ae574458f" 00:11:26.634 } 00:11:26.634 ] 00:11:26.634 } 00:11:26.634 ] 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=968180 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:26.634 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.892 [2024-05-15 03:05:57.862858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:26.892 Malloc4 00:11:26.892 03:05:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:27.151 [2024-05-15 03:05:58.073287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:27.151 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:27.151 Asynchronous Event Request test 00:11:27.151 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:27.151 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:27.151 Registering asynchronous event callbacks... 00:11:27.151 Starting namespace attribute notice tests for all controllers... 00:11:27.151 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:27.151 aer_cb - Changed Namespace 00:11:27.151 Cleaning up... 00:11:27.151 [ 00:11:27.151 { 00:11:27.151 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:27.151 "subtype": "Discovery", 00:11:27.151 "listen_addresses": [], 00:11:27.151 "allow_any_host": true, 00:11:27.151 "hosts": [] 00:11:27.151 }, 00:11:27.151 { 00:11:27.151 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:27.151 "subtype": "NVMe", 00:11:27.151 "listen_addresses": [ 00:11:27.151 { 00:11:27.151 "trtype": "VFIOUSER", 00:11:27.151 "adrfam": "IPv4", 00:11:27.151 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:27.151 "trsvcid": "0" 00:11:27.151 } 00:11:27.151 ], 00:11:27.151 "allow_any_host": true, 00:11:27.151 "hosts": [], 00:11:27.151 "serial_number": "SPDK1", 00:11:27.151 "model_number": "SPDK bdev Controller", 00:11:27.151 "max_namespaces": 32, 00:11:27.151 "min_cntlid": 1, 00:11:27.151 "max_cntlid": 65519, 00:11:27.151 "namespaces": [ 00:11:27.151 { 00:11:27.151 "nsid": 1, 00:11:27.151 "bdev_name": "Malloc1", 00:11:27.151 "name": "Malloc1", 00:11:27.151 "nguid": "55BEEB6CB0BC49FFB9FCF26AF107C84D", 00:11:27.151 "uuid": "55beeb6c-b0bc-49ff-b9fc-f26af107c84d" 00:11:27.151 }, 00:11:27.151 { 00:11:27.151 "nsid": 2, 00:11:27.151 "bdev_name": "Malloc3", 00:11:27.151 "name": "Malloc3", 00:11:27.151 "nguid": "7A2DFA2B285649309D993D8E48FE4E9B", 00:11:27.151 "uuid": "7a2dfa2b-2856-4930-9d99-3d8e48fe4e9b" 00:11:27.151 } 00:11:27.151 ] 00:11:27.151 }, 00:11:27.151 { 00:11:27.151 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:27.151 "subtype": "NVMe", 00:11:27.151 "listen_addresses": [ 00:11:27.151 { 00:11:27.151 "trtype": "VFIOUSER", 00:11:27.151 "adrfam": "IPv4", 00:11:27.151 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:27.151 "trsvcid": "0" 00:11:27.151 } 00:11:27.151 ], 00:11:27.151 "allow_any_host": true, 00:11:27.151 "hosts": [], 00:11:27.151 "serial_number": "SPDK2", 00:11:27.151 "model_number": "SPDK bdev Controller", 00:11:27.151 
"max_namespaces": 32, 00:11:27.151 "min_cntlid": 1, 00:11:27.151 "max_cntlid": 65519, 00:11:27.151 "namespaces": [ 00:11:27.151 { 00:11:27.151 "nsid": 1, 00:11:27.151 "bdev_name": "Malloc2", 00:11:27.151 "name": "Malloc2", 00:11:27.151 "nguid": "F52F7BB4074349F099247F1AE574458F", 00:11:27.151 "uuid": "f52f7bb4-0743-49f0-9924-7f1ae574458f" 00:11:27.151 }, 00:11:27.151 { 00:11:27.151 "nsid": 2, 00:11:27.151 "bdev_name": "Malloc4", 00:11:27.151 "name": "Malloc4", 00:11:27.151 "nguid": "5FF3CBA12F99466FB3B9CDE63F851709", 00:11:27.151 "uuid": "5ff3cba1-2f99-466f-b3b9-cde63f851709" 00:11:27.151 } 00:11:27.151 ] 00:11:27.151 } 00:11:27.151 ] 00:11:27.151 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 968180 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 960440 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 960440 ']' 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 960440 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:27.152 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 960440 00:11:27.411 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:27.411 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:27.411 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 960440' 00:11:27.411 killing process with pid 960440 00:11:27.411 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 960440 00:11:27.411 [2024-05-15 03:05:58.326379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:27.411 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 960440 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=968313 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 968313' 00:11:27.672 Process pid: 968313 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:27.672 03:05:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 968313 00:11:27.673 03:05:58 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 968313 ']' 00:11:27.673 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.673 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:27.673 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.673 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:27.673 03:05:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:27.673 [2024-05-15 03:05:58.661364] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:27.673 [2024-05-15 03:05:58.662186] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:11:27.673 [2024-05-15 03:05:58.662223] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.673 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.673 [2024-05-15 03:05:58.718344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.673 [2024-05-15 03:05:58.798423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.673 [2024-05-15 03:05:58.798461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.673 [2024-05-15 03:05:58.798470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.673 [2024-05-15 03:05:58.798476] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.673 [2024-05-15 03:05:58.798481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.673 [2024-05-15 03:05:58.798527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.673 [2024-05-15 03:05:58.798634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.673 [2024-05-15 03:05:58.798719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.673 [2024-05-15 03:05:58.798720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.934 [2024-05-15 03:05:58.876332] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:27.934 [2024-05-15 03:05:58.876425] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:27.934 [2024-05-15 03:05:58.876670] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:27.934 [2024-05-15 03:05:58.877022] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:27.934 [2024-05-15 03:05:58.877275] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
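For context, the interrupt-mode target exercised above differs from the earlier polled-mode runs only in how it is launched; a minimal sketch of the invocation as traced (the backgrounding and PID capture are illustrative, not part of the log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same binary as the polled-mode runs, plus --interrupt-mode; cores 0-3, all tracepoint groups enabled
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!   # the test script later tears this down via killprocess $nvmfpid
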
00:11:28.503 03:05:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:28.503 03:05:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:11:28.503 03:05:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:29.512 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:29.781 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:29.781 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:29.781 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:29.781 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:29.781 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:29.781 Malloc1 00:11:29.782 03:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:30.041 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:30.300 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:30.300 [2024-05-15 03:06:01.371101] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:30.300 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:30.300 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:30.300 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:30.558 Malloc2 00:11:30.558 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:30.816 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:30.816 03:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 968313 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 968313 ']' 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 968313 00:11:31.075 
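Taken together, the RPCs traced above amount to a short bring-up sequence per vfio-user controller. A condensed sketch, assuming the same workspace paths; the loop and the $rpc shorthand are illustrative, but the individual RPCs are exactly those logged:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER -M -I          # once per target; extra flags as passed by the script
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i   # the socket directory doubles as the traddr
    $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done
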
03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 968313 00:11:31.075 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:31.076 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:31.076 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 968313' 00:11:31.076 killing process with pid 968313 00:11:31.076 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 968313 00:11:31.076 [2024-05-15 03:06:02.196819] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:31.076 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 968313 00:11:31.335 03:06:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:31.335 03:06:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.335 00:11:31.335 real 0m51.353s 00:11:31.335 user 3m23.187s 00:11:31.335 sys 0m3.593s 00:11:31.335 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.335 03:06:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:31.335 ************************************ 00:11:31.335 END TEST nvmf_vfio_user 00:11:31.335 ************************************ 00:11:31.335 03:06:02 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:31.335 03:06:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:31.335 03:06:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:31.335 03:06:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.595 ************************************ 00:11:31.595 START TEST nvmf_vfio_user_nvme_compliance 00:11:31.595 ************************************ 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:31.595 * Looking for test storage... 
00:11:31.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=969195 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 969195' 00:11:31.595 Process pid: 969195 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 969195 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 969195 ']' 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:31.595 03:06:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:31.595 [2024-05-15 03:06:02.676384] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:11:31.595 [2024-05-15 03:06:02.676434] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.595 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.595 [2024-05-15 03:06:02.730339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.854 [2024-05-15 03:06:02.805498] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.854 [2024-05-15 03:06:02.805539] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.854 [2024-05-15 03:06:02.805546] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.854 [2024-05-15 03:06:02.805552] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.854 [2024-05-15 03:06:02.805560] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
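As the app_setup_trace notices above indicate, the 0xFFFF tracepoint mask makes this run inspectable while the target is alive; a small sketch using only the commands the log itself suggests (the copy destination is illustrative):

  spdk_trace -s nvmf -i 0            # live snapshot of the nvmf app started with shm id 0
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shared-memory trace file for offline analysis
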
00:11:31.854 [2024-05-15 03:06:02.805621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.854 [2024-05-15 03:06:02.805714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.854 [2024-05-15 03:06:02.805716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.421 03:06:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:32.422 03:06:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:11:32.422 03:06:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.357 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 malloc0 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:33.616 [2024-05-15 03:06:04.556375] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.616 03:06:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:33.616 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.616 00:11:33.616 00:11:33.616 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.616 http://cunit.sourceforge.net/ 00:11:33.616 00:11:33.616 00:11:33.616 Suite: nvme_compliance 00:11:33.616 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 03:06:04.706897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.616 [2024-05-15 03:06:04.708229] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:33.616 [2024-05-15 03:06:04.708246] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:33.616 [2024-05-15 03:06:04.708252] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:33.616 [2024-05-15 03:06:04.709915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.616 passed 00:11:33.875 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 03:06:04.789479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.875 [2024-05-15 03:06:04.795513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.875 passed 00:11:33.875 Test: admin_identify_ns ...[2024-05-15 03:06:04.872502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.875 [2024-05-15 03:06:04.933475] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:33.875 [2024-05-15 03:06:04.941476] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:33.875 [2024-05-15 03:06:04.962570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.875 passed 00:11:33.875 Test: admin_get_features_mandatory_features ...[2024-05-15 03:06:05.036868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.134 [2024-05-15 03:06:05.039893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.134 passed 00:11:34.134 Test: admin_get_features_optional_features ...[2024-05-15 03:06:05.118405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.134 [2024-05-15 03:06:05.121428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.134 passed 00:11:34.134 Test: admin_set_features_number_of_queues ...[2024-05-15 03:06:05.199414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.393 [2024-05-15 03:06:05.305557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.393 passed 00:11:34.393 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 03:06:05.380846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.393 [2024-05-15 03:06:05.383865] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.393 passed 
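Every client-side tool in this run (spdk_nvme_perf, the reconnect/arbitration/hello_world examples, aer, and the compliance harness here) reaches a vfio-user controller through the same transport-ID string, so any of them can be pointed at the compliance endpoint by hand; an illustrative recombination of flags already seen in this log:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' \
      -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0x2
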
00:11:34.393 Test: admin_get_log_page_with_lpo ...[2024-05-15 03:06:05.461754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.393 [2024-05-15 03:06:05.530475] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:34.393 [2024-05-15 03:06:05.543534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.650 passed 00:11:34.650 Test: fabric_property_get ...[2024-05-15 03:06:05.620877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.650 [2024-05-15 03:06:05.622100] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:34.650 [2024-05-15 03:06:05.623894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.650 passed 00:11:34.650 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 03:06:05.704413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.650 [2024-05-15 03:06:05.705639] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:34.650 [2024-05-15 03:06:05.707439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.650 passed 00:11:34.650 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 03:06:05.786419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.907 [2024-05-15 03:06:05.872474] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:34.907 [2024-05-15 03:06:05.888472] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:34.907 [2024-05-15 03:06:05.893553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.907 passed 00:11:34.907 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 03:06:05.967818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.907 [2024-05-15 03:06:05.969048] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:34.907 [2024-05-15 03:06:05.970837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.907 passed 00:11:34.907 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 03:06:06.048750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.164 [2024-05-15 03:06:06.125474] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:35.164 [2024-05-15 03:06:06.149480] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:35.164 [2024-05-15 03:06:06.154554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.164 passed 00:11:35.164 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 03:06:06.232576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.165 [2024-05-15 03:06:06.233786] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:35.165 [2024-05-15 03:06:06.233807] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:35.165 [2024-05-15 03:06:06.236606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.165 passed 00:11:35.165 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
03:06:06.314915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.423 [2024-05-15 03:06:06.406471] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:35.423 [2024-05-15 03:06:06.414482] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:35.423 [2024-05-15 03:06:06.422472] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:35.423 [2024-05-15 03:06:06.428481] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:35.423 [2024-05-15 03:06:06.458557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.423 passed 00:11:35.423 Test: admin_create_io_sq_verify_pc ...[2024-05-15 03:06:06.533656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.423 [2024-05-15 03:06:06.552481] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:35.423 [2024-05-15 03:06:06.569831] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.682 passed 00:11:35.682 Test: admin_create_io_qp_max_qps ...[2024-05-15 03:06:06.649409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:36.617 [2024-05-15 03:06:07.743921] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:37.183 [2024-05-15 03:06:08.132484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:37.183 passed 00:11:37.183 Test: admin_create_io_sq_shared_cq ...[2024-05-15 03:06:08.209620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:37.183 [2024-05-15 03:06:08.339473] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:37.442 [2024-05-15 03:06:08.376536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:37.442 passed 00:11:37.442 00:11:37.442 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.442 suites 1 1 n/a 0 0 00:11:37.442 tests 18 18 18 0 0 00:11:37.442 asserts 360 360 360 0 n/a 00:11:37.442 00:11:37.442 Elapsed time = 1.510 seconds 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 969195 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 969195 ']' 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 969195 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 969195 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 969195' 00:11:37.442 killing process with pid 969195 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 969195 00:11:37.442 [2024-05-15 03:06:08.464621] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:37.442 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 969195 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:37.701 00:11:37.701 real 0m6.180s 00:11:37.701 user 0m17.692s 00:11:37.701 sys 0m0.438s 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:37.701 ************************************ 00:11:37.701 END TEST nvmf_vfio_user_nvme_compliance 00:11:37.701 ************************************ 00:11:37.701 03:06:08 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:37.701 03:06:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:37.701 03:06:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.701 03:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.701 ************************************ 00:11:37.701 START TEST nvmf_vfio_user_fuzz 00:11:37.701 ************************************ 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:37.701 * Looking for test storage... 
00:11:37.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.701 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:37.702 03:06:08 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=970722 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 970722' 00:11:37.702 Process pid: 970722 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 970722 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 970722 ']' 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:37.702 03:06:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.638 03:06:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:38.638 03:06:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:11:38.638 03:06:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.573 malloc0 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.573 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:39.832 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.832 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:39.832 03:06:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:11.915 Fuzzing completed. Shutting down the fuzz application 00:12:11.915 00:12:11.915 Dumping successful admin opcodes: 00:12:11.915 8, 9, 10, 24, 00:12:11.915 Dumping successful io opcodes: 00:12:11.915 0, 00:12:11.915 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1000998, total successful commands: 3916, random_seed: 706449344 00:12:11.915 NS: 0x200003a1ef00 admin qp, Total commands completed: 244981, total successful commands: 1975, random_seed: 1514910464 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 970722 ']' 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 970722' 00:12:11.915 killing process with pid 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 970722 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:11.915 
03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:11.915 00:12:11.915 real 0m32.761s 00:12:11.915 user 0m31.413s 00:12:11.915 sys 0m29.599s 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.915 03:06:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.915 ************************************ 00:12:11.915 END TEST nvmf_vfio_user_fuzz 00:12:11.915 ************************************ 00:12:11.915 03:06:41 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:11.915 03:06:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:11.915 03:06:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.915 03:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.915 ************************************ 00:12:11.915 START TEST nvmf_host_management 00:12:11.915 ************************************ 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:11.915 * Looking for test storage... 00:12:11.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.915 03:06:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
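The two NVMF_APP+= lines just traced are the heart of build_nvmf_app_args: nvmf/common.sh grows the target's command line incrementally in a bash array and only executes it once everything (shm id, tracepoint mask, optional no-hugepage flags, netns wrapper) has been appended. A minimal sketch of that idiom under the values visible in this run; treat the empty NO_HUGE array and the final backgrounded launch as illustrative assumptions, not part of this trace:

# sketch of the argv-assembly idiom from nvmf/common.sh; values mirror this run
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
NO_HUGE=()                                    # populated only for no-hugepage runs
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")
# nvmf_tcp_init later prefixes the netns wrapper (common.sh line 270 in the
# trace below), so the target runs inside cvl_0_0_ns_spdk:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x1E &                    # matches the nvmfappstart invocation below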
00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.916 03:06:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.131 03:06:46 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:16.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:16.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
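With the device class settled (e810 NICs, TCP transport), the loop that follows walks each matched PCI function and resolves it to a kernel net device through sysfs. A compressed sketch of that lookup, reusing the 0000:86:00.0 address reported above; the glob, suffix strip, and echo mirror common.sh lines 383, 399, 400 and 401 as they appear in the trace:

# resolve one matched PCI function to its net device name via sysfs
pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
net_devs+=("${pci_net_devs[@]}")                   # collected for nvmf_tcp_init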
00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:16.131 Found net devices under 0000:86:00.0: cvl_0_0 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:16.131 Found net devices under 0000:86:00.1: cvl_0_1 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.131 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:16.132 00:12:16.132 --- 10.0.0.2 ping statistics --- 00:12:16.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.132 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:12:16.132 00:12:16.132 --- 10.0.0.1 ping statistics --- 00:12:16.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.132 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=979089 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 979089 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 979089 ']' 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:16.132 03:06:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.132 [2024-05-15 03:06:47.023692] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:12:16.132 [2024-05-15 03:06:47.023734] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.132 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.132 [2024-05-15 03:06:47.081076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.132 [2024-05-15 03:06:47.155924] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.132 [2024-05-15 03:06:47.155969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.132 [2024-05-15 03:06:47.155976] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.132 [2024-05-15 03:06:47.155982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.132 [2024-05-15 03:06:47.155988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.132 [2024-05-15 03:06:47.156090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.132 [2024-05-15 03:06:47.156179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.132 [2024-05-15 03:06:47.156213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.132 [2024-05-15 03:06:47.156214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.700 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.700 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:16.700 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.700 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.700 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 [2024-05-15 03:06:47.874439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:16.959 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.959 03:06:47 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 Malloc0 00:12:16.959 [2024-05-15 03:06:47.934116] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:16.959 [2024-05-15 03:06:47.934360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=979360 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 979360 /var/tmp/bdevperf.sock 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 979360 ']' 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:16.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:16.960 { 00:12:16.960 "params": { 00:12:16.960 "name": "Nvme$subsystem", 00:12:16.960 "trtype": "$TEST_TRANSPORT", 00:12:16.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.960 "adrfam": "ipv4", 00:12:16.960 "trsvcid": "$NVMF_PORT", 00:12:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.960 "hdgst": ${hdgst:-false}, 00:12:16.960 "ddgst": ${ddgst:-false} 00:12:16.960 }, 00:12:16.960 "method": "bdev_nvme_attach_controller" 00:12:16.960 } 00:12:16.960 EOF 00:12:16.960 )") 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
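The gen_nvmf_target_json trace above is a heredoc template: the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and ${hdgst:-false} placeholders are expanded by the shell, the result is validated with jq, and the finished document is handed to bdevperf through a process substitution, which is why the perf command line reads --json /dev/fd/63. A hedged sketch of the equivalent direct invocation (path shortened for readability, and gen_nvmf_target_json assumed to be in scope):

# feed the generated config to bdevperf; <(...) is what surfaces as /dev/fd/63
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10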
00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:16.960 03:06:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:16.960 "params": { 00:12:16.960 "name": "Nvme0", 00:12:16.960 "trtype": "tcp", 00:12:16.960 "traddr": "10.0.0.2", 00:12:16.960 "adrfam": "ipv4", 00:12:16.960 "trsvcid": "4420", 00:12:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:16.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:16.960 "hdgst": false, 00:12:16.960 "ddgst": false 00:12:16.960 }, 00:12:16.960 "method": "bdev_nvme_attach_controller" 00:12:16.960 }' 00:12:16.960 [2024-05-15 03:06:48.026052] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:12:16.960 [2024-05-15 03:06:48.026094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979360 ] 00:12:16.960 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.960 [2024-05-15 03:06:48.079993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.219 [2024-05-15 03:06:48.152947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.478 Running I/O for 10 seconds... 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.028 03:06:48 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=724 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 724 -ge 100 ']' 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.028 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.028 [2024-05-15 03:06:48.917573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f100 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.917635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f100 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.917643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f100 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.917650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f100 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.917656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f100 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.920769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.028 [2024-05-15 03:06:48.920805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.028 [2024-05-15 03:06:48.920814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.028 [2024-05-15 03:06:48.920823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.028 [2024-05-15 03:06:48.920831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.028 [2024-05-15 03:06:48.920837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.028 [2024-05-15 03:06:48.920845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.028 [2024-05-15 03:06:48.920852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.028 [2024-05-15 03:06:48.920859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2365840 is same with the state(5) to be set 00:12:18.028 [2024-05-15 03:06:48.920895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.028 [2024-05-15 03:06:48.920905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:18.028 [log collapsed: 63 repeated NOTICE pairs, nvme_io_qpair_print_command WRITE sqid:1 cid:1-63 nsid:1 lba:106624-114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:12:18.030 [2024-05-15 03:06:48.922041] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2776620 was disconnected and freed. reset controller.
00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:18.030 [2024-05-15 03:06:48.922944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:18.030 task offset: 106496 on job bdev=Nvme0n1 fails
00:12:18.030
00:12:18.030 Latency(us)
00:12:18.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:18.030 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:18.030 Job: Nvme0n1 ended in about 0.45 seconds with error
00:12:18.030 Verification LBA range: start 0x0 length 0x400
00:12:18.030 Nvme0n1 : 0.45 1840.35 115.02 141.57 0.00 31488.74 1745.25 27582.11
00:12:18.030 ===================================================================================================================
00:12:18.030 Total : 1840.35 115.02 141.57 0.00 31488.74 1745.25 27582.11
00:12:18.030 [2024-05-15 03:06:48.924538] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:18.030 [2024-05-15 03:06:48.924555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2365840 (9): Bad file descriptor
00:12:18.030 [2024-05-15 03:06:48.927232] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:12:18.030 [2024-05-15 03:06:48.927308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:12:18.030 [2024-05-15 03:06:48.927333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:18.030 [2024-05-15 03:06:48.927346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
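The trace above is the point of the host-management test: the host NQN was dropped from the subsystem allow list, so in-flight WRITEs were aborted by SQ deletion and the reconnect is rejected as an invalid host (the COMMAND SPECIFIC 01/84 completion) until target/host_management.sh@85 re-adds it. A minimal sketch of the two allow-list RPCs involved, assuming the stock scripts/rpc.py entry points; the remove step is inferred from the test flow and is not visible in this excerpt:

# hedged sketch: allow-list manipulation driving the failure and recovery seen here
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # revoke access; target tears down the host's queue pairs
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # restore access (the rpc_cmd step traced above)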
00:12:18.030 [2024-05-15 03:06:48.927355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:18.030 [2024-05-15 03:06:48.927363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:18.030 [2024-05-15 03:06:48.927371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2365840 00:12:18.030 [2024-05-15 03:06:48.927390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2365840 (9): Bad file descriptor 00:12:18.030 [2024-05-15 03:06:48.927402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:18.030 [2024-05-15 03:06:48.927410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:18.030 [2024-05-15 03:06:48.927419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:18.030 [2024-05-15 03:06:48.927433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.030 03:06:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 979360 00:12:18.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (979360) - No such process 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:18.992 { 00:12:18.992 "params": { 00:12:18.992 "name": "Nvme$subsystem", 00:12:18.992 "trtype": "$TEST_TRANSPORT", 00:12:18.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:18.992 "adrfam": "ipv4", 00:12:18.992 "trsvcid": "$NVMF_PORT", 00:12:18.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:18.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:18.992 "hdgst": ${hdgst:-false}, 00:12:18.992 "ddgst": ${ddgst:-false} 00:12:18.992 }, 00:12:18.992 "method": "bdev_nvme_attach_controller" 00:12:18.992 } 00:12:18.992 EOF 00:12:18.992 )") 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:18.992 03:06:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:18.992 "params": { 00:12:18.992 "name": "Nvme0", 00:12:18.992 "trtype": "tcp", 00:12:18.992 "traddr": "10.0.0.2", 00:12:18.992 "adrfam": "ipv4", 00:12:18.992 "trsvcid": "4420", 00:12:18.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:18.992 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:18.992 "hdgst": false, 00:12:18.992 "ddgst": false 00:12:18.992 }, 00:12:18.992 "method": "bdev_nvme_attach_controller" 00:12:18.992 }' 00:12:18.992 [2024-05-15 03:06:49.985242] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:12:18.992 [2024-05-15 03:06:49.985292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979698 ] 00:12:18.992 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.992 [2024-05-15 03:06:50.041284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.992 [2024-05-15 03:06:50.120832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.252 Running I/O for 1 seconds... 00:12:20.190 00:12:20.190 Latency(us) 00:12:20.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.190 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:20.190 Verification LBA range: start 0x0 length 0x400 00:12:20.190 Nvme0n1 : 1.00 1914.01 119.63 0.00 0.00 32916.15 7009.50 27240.18 00:12:20.190 =================================================================================================================== 00:12:20.190 Total : 1914.01 119.63 0.00 0.00 32916.15 7009.50 27240.18 00:12:20.451 03:06:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:20.451 03:06:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:20.451 03:06:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:20.451 03:06:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.452 rmmod nvme_tcp 00:12:20.452 rmmod nvme_fabrics 00:12:20.452 rmmod nvme_keyring 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # 
'[' -n 979089 ']' 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 979089 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 979089 ']' 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 979089 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 979089 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 979089' 00:12:20.452 killing process with pid 979089 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 979089 00:12:20.452 [2024-05-15 03:06:51.602178] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:20.452 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 979089 00:12:20.711 [2024-05-15 03:06:51.810744] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.711 03:06:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.245 03:06:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.245 03:06:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:23.245 00:12:23.245 real 0m12.309s 00:12:23.245 user 0m22.473s 00:12:23.245 sys 0m5.080s 00:12:23.245 03:06:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.245 03:06:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:23.245 ************************************ 00:12:23.245 END TEST nvmf_host_management 00:12:23.245 ************************************ 00:12:23.245 03:06:53 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:23.245 03:06:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:23.245 03:06:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.245 03:06:53 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:12:23.245 ************************************ 00:12:23.245 START TEST nvmf_lvol 00:12:23.245 ************************************ 00:12:23.245 03:06:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:23.245 * Looking for test storage... 00:12:23.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.245 03:06:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.516 03:06:59 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:28.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:28.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:28.516 Found net devices under 0000:86:00.0: cvl_0_0 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
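The pci_net_devs glob above is how common.sh maps each matched PCI function to the kernel netdev bound to it; a hedged one-liner equivalent, with the two E810 addresses taken from the Found 0000:86:00.x lines:

# hedged sketch: resolve the netdev behind each candidate port via sysfs
for pci in 0000:86:00.0 0000:86:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 / cvl_0_1 on this node
done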
00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:28.516 Found net devices under 0000:86:00.1: cvl_0_1 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:12:28.516 00:12:28.516 --- 10.0.0.2 ping statistics --- 00:12:28.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.516 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:12:28.516 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:12:28.516 00:12:28.516 --- 10.0.0.1 ping statistics --- 00:12:28.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.517 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=983376 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 983376 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 983376 ']' 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.517 03:06:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.517 [2024-05-15 03:06:59.408422] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:12:28.517 [2024-05-15 03:06:59.408462] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.517 [2024-05-15 03:06:59.466552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:28.517 [2024-05-15 03:06:59.538955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.517 [2024-05-15 03:06:59.538996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
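Before the lvol test proper begins, nvmf_tgt is launched inside the namespace wired up above; a condensed, hedged recap of that plumbing, with every command taken from the nvmf/common.sh trace between @248 and @480:

# hedged recap: one physical port per namespace, target runs inside cvl_0_0_ns_spdk
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator port stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> default ns
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7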
00:12:28.517 [2024-05-15 03:06:59.539007] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.517 [2024-05-15 03:06:59.539012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.517 [2024-05-15 03:06:59.539017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.517 [2024-05-15 03:06:59.539064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.517 [2024-05-15 03:06:59.539161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.517 [2024-05-15 03:06:59.539163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.083 03:07:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.083 03:07:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:12:29.083 03:07:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.083 03:07:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.083 03:07:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:29.382 03:07:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.382 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:29.382 [2024-05-15 03:07:00.404410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.382 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:29.641 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:29.641 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:29.898 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:29.898 03:07:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:29.898 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:30.156 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7bcc2932-d6bd-43b1-bcca-b37007425327 00:12:30.156 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7bcc2932-d6bd-43b1-bcca-b37007425327 lvol 20 00:12:30.414 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=205bf9dc-c941-44fa-991c-8c34e825fa17 00:12:30.414 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:30.672 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 205bf9dc-c941-44fa-991c-8c34e825fa17 00:12:30.672 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
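With the listener up, the storage stack for this test is fully provisioned; a hedged recap of the chain just traced, where the two UUID placeholders stand for the values rpc.py returned above:

# hedged recap: lvol-on-raid0 volume exported over NVMe/TCP via scripts/rpc.py
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                                   # Malloc0
scripts/rpc.py bdev_malloc_create 64 512                                   # Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs                          # returns the lvstore UUID
scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MiB volume, returns the lvol UUID
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420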
00:12:30.931 [2024-05-15 03:07:01.905699] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:30.931 [2024-05-15 03:07:01.905959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.931 03:07:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.189 03:07:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=983874 00:12:31.189 03:07:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:31.189 03:07:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:31.189 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.121 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 205bf9dc-c941-44fa-991c-8c34e825fa17 MY_SNAPSHOT 00:12:32.378 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=045b4fbe-53c6-48e6-aaba-d6cdbbb8da65 00:12:32.378 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 205bf9dc-c941-44fa-991c-8c34e825fa17 30 00:12:32.636 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 045b4fbe-53c6-48e6-aaba-d6cdbbb8da65 MY_CLONE 00:12:32.894 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b233bc73-2e03-4069-a208-b05325220db8 00:12:32.894 03:07:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b233bc73-2e03-4069-a208-b05325220db8 00:12:33.459 03:07:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 983874 00:12:41.566 Initializing NVMe Controllers 00:12:41.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:41.566 Controller IO queue size 128, less than required. 00:12:41.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:41.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:41.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:41.566 Initialization complete. Launching workers. 
00:12:41.566 ========================================================
00:12:41.566 Latency(us)
00:12:41.566 Device Information : IOPS MiB/s Average min max
00:12:41.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11617.70 45.38 11024.82 1779.81 124563.69
00:12:41.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11430.30 44.65 11199.80 3682.61 51036.93
00:12:41.566 ========================================================
00:12:41.566 Total : 23048.00 90.03 11111.60 1779.81 124563.69
00:12:41.566
00:12:41.566 03:07:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:41.566 03:07:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 205bf9dc-c941-44fa-991c-8c34e825fa17
00:12:41.825 03:07:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bcc2932-d6bd-43b1-bcca-b37007425327
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:42.083 rmmod nvme_tcp
00:12:42.083 rmmod nvme_fabrics
00:12:42.083 rmmod nvme_keyring
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 983376 ']'
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 983376
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 983376 ']'
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 983376
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 983376
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 983376'
00:12:42.083 killing process with pid 983376
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 983376
00:12:42.083 [2024-05-15 03:07:13.206387] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:12:42.083 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 983376
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:42.343 03:07:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:44.875
00:12:44.875 real 0m21.554s
00:12:44.875 user 1m4.331s
00:12:44.875 sys 0m6.512s
00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable
00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:12:44.875 ************************************
00:12:44.875 END TEST nvmf_lvol
00:12:44.875 ************************************
00:12:44.875 03:07:15 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:12:44.875 03:07:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:12:44.875 03:07:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:12:44.875 03:07:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:44.875 ************************************
00:12:44.875 START TEST nvmf_lvs_grow
00:12:44.875 ************************************
00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:12:44.875 * Looking for test storage...
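[Editor's note] The nvmf_lvol teardown that just completed is easier to read stripped of the xtrace noise: resources are released in strict dependency order before the target process is killed. A minimal sketch of the same sequence, where RPC_PY is shorthand for the rpc.py path used throughout this log and the two UUIDs are the run-specific values from the trace above:

#!/usr/bin/env bash
# Teardown order from nvmf_lvol.sh@56-64, as traced above: the NVMe-oF
# subsystem goes first, then the lvol bdev it exported, then the lvstore.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC_PY nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC_PY bdev_lvol_delete 205bf9dc-c941-44fa-991c-8c34e825fa17            # lvol UUID from this run
$RPC_PY bdev_lvol_delete_lvstore -u 7bcc2932-d6bd-43b1-bcca-b37007425327 # lvstore UUID from this run

# nvmftestfini then unloads the kernel initiator stack; nvmf/common.sh wraps
# this in set +e plus a retry loop because rmmod can fail transiently while
# connections drain (hence the bare "rmmod nvme_tcp" lines in the trace).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

Only once the RPCs have succeeded is the nvmf_tgt process (pid 983376 in this run) killed and the test network namespace removed.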
00:12:44.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.875 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.876 03:07:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:50.178 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:50.178 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:50.178 Found net devices under 0000:86:00.0: cvl_0_0 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:50.178 Found net devices under 0000:86:00.1: cvl_0_1 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.178 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:50.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:50.179 00:12:50.179 --- 10.0.0.2 ping statistics --- 00:12:50.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.179 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:50.179 00:12:50.179 --- 10.0.0.1 ping statistics --- 00:12:50.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.179 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=989222 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 989222 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 989222 ']' 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:50.179 03:07:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.179 [2024-05-15 03:07:20.814688] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:12:50.179 [2024-05-15 03:07:20.814731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.179 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.179 [2024-05-15 03:07:20.873159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.179 [2024-05-15 03:07:20.944601] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.179 [2024-05-15 03:07:20.944641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
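[Editor's note] The nvmf_tcp_init trace above (nvmf/common.sh@229-268) builds the standard phy-mode topology: the two E810 ports are split so one acts as the target inside a private network namespace while the other stays in the root namespace as the initiator, forcing real on-wire NVMe/TCP between them. A condensed sketch, using the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses from this run:

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above. cvl_0_0 becomes the target port
# inside its own namespace; cvl_0_1 stays in the root namespace as the
# initiator port. Names and addresses are run-specific values from this log.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add $NS
ip link set cvl_0_0 netns $NS                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up

# Open the NVMe/TCP port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart launches the target as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1", which is the invocation whose startup notices surround this point in the log.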
00:12:50.179 [2024-05-15 03:07:20.944648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.179 [2024-05-15 03:07:20.944654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.179 [2024-05-15 03:07:20.944659] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.179 [2024-05-15 03:07:20.944678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.744 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:50.745 [2024-05-15 03:07:21.799090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.745 ************************************ 00:12:50.745 START TEST lvs_grow_clean 00:12:50.745 ************************************ 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.745 03:07:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:51.004 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:51.004 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=65f8761a-33f1-4b5f-a8de-719406367132 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:51.262 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65f8761a-33f1-4b5f-a8de-719406367132 lvol 150 00:12:51.521 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c9b9b7bd-2b95-47f7-9f09-27d6223a4448 00:12:51.521 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:51.521 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:51.780 [2024-05-15 03:07:22.731993] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:51.780 [2024-05-15 03:07:22.732043] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:51.780 true 00:12:51.780 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:12:51.780 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:51.780 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:51.780 03:07:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:52.038 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c9b9b7bd-2b95-47f7-9f09-27d6223a4448 00:12:52.296 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:52.297 [2024-05-15 03:07:23.381769] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:52.297 [2024-05-15 
03:07:23.382019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.297 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=989727 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 989727 /var/tmp/bdevperf.sock 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 989727 ']' 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:52.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.555 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:52.555 [2024-05-15 03:07:23.579100] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
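[Editor's note] Before bdevperf comes up, the clean lvs_grow variant has already assembled its device stack; the xtrace lines above compress to a short pipeline: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, and NVMe/TCP export of that lvol. A condensed sketch; RPC_PY, AIO and the shell variables are shorthand for the paths and UUID outputs seen in this log:

#!/usr/bin/env bash
# Clean-variant setup from nvmf_lvs_grow.sh@23-44, as traced above.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

rm -f $AIO
truncate -s 200M $AIO                          # 200 MiB backing file
$RPC_PY bdev_aio_create $AIO aio_bdev 4096     # AIO bdev, 4 KiB blocks

# lvstore with 4 MiB clusters; 200 MiB minus metadata leaves the 49
# total_data_clusters asserted at @29-30 in the trace above.
lvs=$($RPC_PY bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC_PY bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'

# 150 MiB lvol, exported over NVMe/TCP on the namespaced target address.
lvol=$($RPC_PY bdev_lvol_create -u $lvs lvol 150)
$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC_PY nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with -z so it waits on its own RPC socket; bdev_nvme_attach_controller over /var/tmp/bdevperf.sock connects it to the subsystem as Nvme0, and perform_tests (bdevperf.py) starts the 10-second randwrite load shown below.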
00:12:52.555 [2024-05-15 03:07:23.579144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989727 ] 00:12:52.555 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.555 [2024-05-15 03:07:23.630479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.555 [2024-05-15 03:07:23.702401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.814 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.814 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:12:52.814 03:07:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:53.073 Nvme0n1 00:12:53.073 03:07:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:53.332 [ 00:12:53.332 { 00:12:53.332 "name": "Nvme0n1", 00:12:53.332 "aliases": [ 00:12:53.332 "c9b9b7bd-2b95-47f7-9f09-27d6223a4448" 00:12:53.332 ], 00:12:53.332 "product_name": "NVMe disk", 00:12:53.332 "block_size": 4096, 00:12:53.332 "num_blocks": 38912, 00:12:53.332 "uuid": "c9b9b7bd-2b95-47f7-9f09-27d6223a4448", 00:12:53.332 "assigned_rate_limits": { 00:12:53.332 "rw_ios_per_sec": 0, 00:12:53.332 "rw_mbytes_per_sec": 0, 00:12:53.332 "r_mbytes_per_sec": 0, 00:12:53.332 "w_mbytes_per_sec": 0 00:12:53.332 }, 00:12:53.332 "claimed": false, 00:12:53.332 "zoned": false, 00:12:53.332 "supported_io_types": { 00:12:53.332 "read": true, 00:12:53.332 "write": true, 00:12:53.332 "unmap": true, 00:12:53.332 "write_zeroes": true, 00:12:53.332 "flush": true, 00:12:53.332 "reset": true, 00:12:53.332 "compare": true, 00:12:53.332 "compare_and_write": true, 00:12:53.332 "abort": true, 00:12:53.332 "nvme_admin": true, 00:12:53.332 "nvme_io": true 00:12:53.332 }, 00:12:53.332 "memory_domains": [ 00:12:53.332 { 00:12:53.332 "dma_device_id": "system", 00:12:53.332 "dma_device_type": 1 00:12:53.332 } 00:12:53.332 ], 00:12:53.332 "driver_specific": { 00:12:53.332 "nvme": [ 00:12:53.332 { 00:12:53.332 "trid": { 00:12:53.332 "trtype": "TCP", 00:12:53.332 "adrfam": "IPv4", 00:12:53.332 "traddr": "10.0.0.2", 00:12:53.332 "trsvcid": "4420", 00:12:53.332 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:53.332 }, 00:12:53.332 "ctrlr_data": { 00:12:53.332 "cntlid": 1, 00:12:53.332 "vendor_id": "0x8086", 00:12:53.332 "model_number": "SPDK bdev Controller", 00:12:53.332 "serial_number": "SPDK0", 00:12:53.332 "firmware_revision": "24.05", 00:12:53.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:53.332 "oacs": { 00:12:53.332 "security": 0, 00:12:53.332 "format": 0, 00:12:53.332 "firmware": 0, 00:12:53.332 "ns_manage": 0 00:12:53.332 }, 00:12:53.332 "multi_ctrlr": true, 00:12:53.332 "ana_reporting": false 00:12:53.332 }, 00:12:53.332 "vs": { 00:12:53.332 "nvme_version": "1.3" 00:12:53.332 }, 00:12:53.332 "ns_data": { 00:12:53.332 "id": 1, 00:12:53.332 "can_share": true 00:12:53.332 } 00:12:53.332 } 00:12:53.332 ], 00:12:53.332 "mp_policy": "active_passive" 00:12:53.332 } 00:12:53.332 } 00:12:53.332 ] 00:12:53.332 03:07:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=989744 00:12:53.332 03:07:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:53.332 03:07:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:53.332 Running I/O for 10 seconds... 00:12:54.268 Latency(us) 00:12:54.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.269 Nvme0n1 : 1.00 22094.00 86.30 0.00 0.00 0.00 0.00 0.00 00:12:54.269 =================================================================================================================== 00:12:54.269 Total : 22094.00 86.30 0.00 0.00 0.00 0.00 0.00 00:12:54.269 00:12:55.206 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65f8761a-33f1-4b5f-a8de-719406367132 00:12:55.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.206 Nvme0n1 : 2.00 22163.00 86.57 0.00 0.00 0.00 0.00 0.00 00:12:55.206 =================================================================================================================== 00:12:55.206 Total : 22163.00 86.57 0.00 0.00 0.00 0.00 0.00 00:12:55.206 00:12:55.465 true 00:12:55.465 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:12:55.465 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:55.465 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:55.465 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:55.465 03:07:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 989744 00:12:56.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.400 Nvme0n1 : 3.00 22196.67 86.71 0.00 0.00 0.00 0.00 0.00 00:12:56.401 =================================================================================================================== 00:12:56.401 Total : 22196.67 86.71 0.00 0.00 0.00 0.00 0.00 00:12:56.401 00:12:57.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.337 Nvme0n1 : 4.00 22259.50 86.95 0.00 0.00 0.00 0.00 0.00 00:12:57.337 =================================================================================================================== 00:12:57.337 Total : 22259.50 86.95 0.00 0.00 0.00 0.00 0.00 00:12:57.337 00:12:58.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.272 Nvme0n1 : 5.00 22292.40 87.08 0.00 0.00 0.00 0.00 0.00 00:12:58.272 =================================================================================================================== 00:12:58.272 Total : 22292.40 87.08 0.00 0.00 0.00 0.00 0.00 00:12:58.272 00:12:59.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.210 Nvme0n1 : 6.00 22282.33 87.04 0.00 0.00 0.00 0.00 0.00 00:12:59.210 
=================================================================================================================== 00:12:59.210 Total : 22282.33 87.04 0.00 0.00 0.00 0.00 0.00 00:12:59.210 00:13:00.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.585 Nvme0n1 : 7.00 22303.71 87.12 0.00 0.00 0.00 0.00 0.00 00:13:00.585 =================================================================================================================== 00:13:00.585 Total : 22303.71 87.12 0.00 0.00 0.00 0.00 0.00 00:13:00.585 00:13:01.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.520 Nvme0n1 : 8.00 22328.75 87.22 0.00 0.00 0.00 0.00 0.00 00:13:01.520 =================================================================================================================== 00:13:01.520 Total : 22328.75 87.22 0.00 0.00 0.00 0.00 0.00 00:13:01.520 00:13:02.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.456 Nvme0n1 : 9.00 22352.67 87.32 0.00 0.00 0.00 0.00 0.00 00:13:02.456 =================================================================================================================== 00:13:02.457 Total : 22352.67 87.32 0.00 0.00 0.00 0.00 0.00 00:13:02.457 00:13:03.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.393 Nvme0n1 : 10.00 22373.40 87.40 0.00 0.00 0.00 0.00 0.00 00:13:03.393 =================================================================================================================== 00:13:03.393 Total : 22373.40 87.40 0.00 0.00 0.00 0.00 0.00 00:13:03.393 00:13:03.393 00:13:03.393 Latency(us) 00:13:03.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.393 Nvme0n1 : 10.01 22373.29 87.40 0.00 0.00 5716.74 1588.54 7351.43 00:13:03.393 =================================================================================================================== 00:13:03.393 Total : 22373.29 87.40 0.00 0.00 5716.74 1588.54 7351.43 00:13:03.393 0 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 989727 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 989727 ']' 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 989727 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 989727 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 989727' 00:13:03.393 killing process with pid 989727 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 989727 00:13:03.393 Received shutdown signal, test time was about 10.000000 seconds 00:13:03.393 00:13:03.393 Latency(us) 00:13:03.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:13:03.393 =================================================================================================================== 00:13:03.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.393 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 989727 00:13:03.652 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.652 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:03.911 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:03.911 03:07:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:04.169 [2024-05-15 03:07:35.292001] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.169 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:04.428 request: 00:13:04.428 { 00:13:04.428 "uuid": "65f8761a-33f1-4b5f-a8de-719406367132", 00:13:04.428 "method": "bdev_lvol_get_lvstores", 00:13:04.428 "req_id": 1 00:13:04.428 } 00:13:04.428 Got JSON-RPC error response 00:13:04.428 response: 00:13:04.428 { 00:13:04.428 "code": -19, 00:13:04.428 "message": "No such device" 00:13:04.428 } 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.428 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.687 aio_bdev 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c9b9b7bd-2b95-47f7-9f09-27d6223a4448 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=c9b9b7bd-2b95-47f7-9f09-27d6223a4448 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:04.687 03:07:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c9b9b7bd-2b95-47f7-9f09-27d6223a4448 -t 2000 00:13:04.947 [ 00:13:04.947 { 00:13:04.947 "name": "c9b9b7bd-2b95-47f7-9f09-27d6223a4448", 00:13:04.947 "aliases": [ 00:13:04.947 "lvs/lvol" 00:13:04.947 ], 00:13:04.947 "product_name": "Logical Volume", 00:13:04.947 "block_size": 4096, 00:13:04.947 "num_blocks": 38912, 00:13:04.947 "uuid": "c9b9b7bd-2b95-47f7-9f09-27d6223a4448", 00:13:04.947 "assigned_rate_limits": { 00:13:04.947 "rw_ios_per_sec": 0, 00:13:04.947 "rw_mbytes_per_sec": 0, 00:13:04.947 "r_mbytes_per_sec": 0, 00:13:04.947 "w_mbytes_per_sec": 0 00:13:04.947 }, 00:13:04.947 "claimed": false, 00:13:04.947 "zoned": false, 00:13:04.947 "supported_io_types": { 00:13:04.947 "read": true, 00:13:04.947 "write": true, 00:13:04.947 "unmap": true, 00:13:04.947 "write_zeroes": true, 00:13:04.947 "flush": false, 00:13:04.947 "reset": true, 00:13:04.947 "compare": false, 00:13:04.947 "compare_and_write": false, 00:13:04.947 "abort": false, 00:13:04.947 "nvme_admin": false, 00:13:04.947 "nvme_io": false 00:13:04.947 }, 00:13:04.947 "driver_specific": { 00:13:04.947 "lvol": { 00:13:04.947 "lvol_store_uuid": "65f8761a-33f1-4b5f-a8de-719406367132", 00:13:04.947 "base_bdev": "aio_bdev", 00:13:04.947 "thin_provision": false, 
00:13:04.947 "num_allocated_clusters": 38, 00:13:04.947 "snapshot": false, 00:13:04.947 "clone": false, 00:13:04.947 "esnap_clone": false 00:13:04.947 } 00:13:04.947 } 00:13:04.947 } 00:13:04.947 ] 00:13:04.947 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:13:04.947 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:04.947 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:05.206 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:05.206 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:05.206 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:05.206 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:05.206 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c9b9b7bd-2b95-47f7-9f09-27d6223a4448 00:13:05.465 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65f8761a-33f1-4b5f-a8de-719406367132 00:13:05.724 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.983 00:13:05.983 real 0m15.075s 00:13:05.983 user 0m14.712s 00:13:05.983 sys 0m1.330s 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:05.983 ************************************ 00:13:05.983 END TEST lvs_grow_clean 00:13:05.983 ************************************ 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:05.983 ************************************ 00:13:05.983 START TEST lvs_grow_dirty 00:13:05.983 ************************************ 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:05.983 03:07:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.983 03:07:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.983 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:06.242 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:06.243 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:06.243 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=65f10805-7c04-417c-89a3-693f889038d5 00:13:06.243 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:06.243 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:06.502 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:06.502 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:06.502 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65f10805-7c04-417c-89a3-693f889038d5 lvol 150 00:13:06.761 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:06.761 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:06.761 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:06.761 [2024-05-15 03:07:37.891709] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:06.761 [2024-05-15 03:07:37.891761] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:06.761 true 00:13:06.761 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:06.761 03:07:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:07.020 
03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:07.020 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:07.279 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:07.279 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:07.538 [2024-05-15 03:07:38.585805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.538 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=992318 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 992318 /var/tmp/bdevperf.sock 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 992318 ']' 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.797 03:07:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 [2024-05-15 03:07:38.819338] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
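[Editor's note] The dirty variant starting here reuses the same machinery; what both variants exist to exercise is the online grow, which for this run arrives a few lines below (nvmf_lvs_grow.sh@60-62, lvstore 65f10805-7c04-417c-89a3-693f889038d5). Right after the lvol is created, the backing file is doubled and the AIO bdev rescanned; then, while bdevperf is mid-randwrite, the lvstore is grown into the new space and the cluster counts asserted. A sketch, with $lvs standing for the lvstore UUID printed at setup time:

#!/usr/bin/env bash
# Online-grow core of both lvs_grow runs, as traced in this log.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 400M $AIO             # double the backing file (200M -> 400M)
$RPC_PY bdev_aio_rescan aio_bdev  # bdev_aio logs: 51200 -> 102400 blocks

# The lvstore still reports 49 total_data_clusters until it is explicitly
# grown, which the test does under live bdevperf I/O:
$RPC_PY bdev_lvol_grow_lvstore -u $lvs

data_clusters=$($RPC_PY bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 ))         # 49 -> 99 four-MiB clusters after the grow

The clean run above passed this same assertion at @61-62, and its later free-cluster check (99 total clusters minus the 38 allocated by the 150 MiB lvol leaves the free_clusters=61 seen at @70) closes the loop before teardown.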
00:13:07.797 [2024-05-15 03:07:38.819383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992318 ] 00:13:07.797 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.797 [2024-05-15 03:07:38.873438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.797 [2024-05-15 03:07:38.952247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.735 03:07:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:08.735 03:07:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:08.735 03:07:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:08.994 Nvme0n1 00:13:08.994 03:07:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:08.994 [ 00:13:08.994 { 00:13:08.994 "name": "Nvme0n1", 00:13:08.994 "aliases": [ 00:13:08.994 "6a769fc7-4d16-4417-8edd-19eabd35d943" 00:13:08.994 ], 00:13:08.994 "product_name": "NVMe disk", 00:13:08.994 "block_size": 4096, 00:13:08.994 "num_blocks": 38912, 00:13:08.994 "uuid": "6a769fc7-4d16-4417-8edd-19eabd35d943", 00:13:08.994 "assigned_rate_limits": { 00:13:08.994 "rw_ios_per_sec": 0, 00:13:08.994 "rw_mbytes_per_sec": 0, 00:13:08.994 "r_mbytes_per_sec": 0, 00:13:08.994 "w_mbytes_per_sec": 0 00:13:08.994 }, 00:13:08.994 "claimed": false, 00:13:08.994 "zoned": false, 00:13:08.994 "supported_io_types": { 00:13:08.994 "read": true, 00:13:08.994 "write": true, 00:13:08.994 "unmap": true, 00:13:08.994 "write_zeroes": true, 00:13:08.994 "flush": true, 00:13:08.994 "reset": true, 00:13:08.994 "compare": true, 00:13:08.994 "compare_and_write": true, 00:13:08.994 "abort": true, 00:13:08.994 "nvme_admin": true, 00:13:08.994 "nvme_io": true 00:13:08.994 }, 00:13:08.994 "memory_domains": [ 00:13:08.994 { 00:13:08.994 "dma_device_id": "system", 00:13:08.994 "dma_device_type": 1 00:13:08.994 } 00:13:08.994 ], 00:13:08.994 "driver_specific": { 00:13:08.994 "nvme": [ 00:13:08.994 { 00:13:08.994 "trid": { 00:13:08.994 "trtype": "TCP", 00:13:08.994 "adrfam": "IPv4", 00:13:08.994 "traddr": "10.0.0.2", 00:13:08.994 "trsvcid": "4420", 00:13:08.994 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:08.994 }, 00:13:08.994 "ctrlr_data": { 00:13:08.994 "cntlid": 1, 00:13:08.994 "vendor_id": "0x8086", 00:13:08.994 "model_number": "SPDK bdev Controller", 00:13:08.994 "serial_number": "SPDK0", 00:13:08.994 "firmware_revision": "24.05", 00:13:08.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.994 "oacs": { 00:13:08.994 "security": 0, 00:13:08.994 "format": 0, 00:13:08.994 "firmware": 0, 00:13:08.994 "ns_manage": 0 00:13:08.994 }, 00:13:08.994 "multi_ctrlr": true, 00:13:08.994 "ana_reporting": false 00:13:08.994 }, 00:13:08.994 "vs": { 00:13:08.994 "nvme_version": "1.3" 00:13:08.994 }, 00:13:08.994 "ns_data": { 00:13:08.994 "id": 1, 00:13:08.994 "can_share": true 00:13:08.994 } 00:13:08.994 } 00:13:08.994 ], 00:13:08.994 "mp_policy": "active_passive" 00:13:08.994 } 00:13:08.994 } 00:13:08.994 ] 00:13:09.252 03:07:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=992546 00:13:09.252 03:07:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:09.252 03:07:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:09.252 Running I/O for 10 seconds... 00:13:10.285 Latency(us) 00:13:10.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.285 Nvme0n1 : 1.00 22804.00 89.08 0.00 0.00 0.00 0.00 0.00 00:13:10.285 =================================================================================================================== 00:13:10.285 Total : 22804.00 89.08 0.00 0.00 0.00 0.00 0.00 00:13:10.285 00:13:11.222 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:11.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.222 Nvme0n1 : 2.00 22962.50 89.70 0.00 0.00 0.00 0.00 0.00 00:13:11.222 =================================================================================================================== 00:13:11.222 Total : 22962.50 89.70 0.00 0.00 0.00 0.00 0.00 00:13:11.222 00:13:11.222 true 00:13:11.222 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:11.222 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:11.481 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:11.481 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:11.481 03:07:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 992546 00:13:12.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.417 Nvme0n1 : 3.00 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:13:12.417 =================================================================================================================== 00:13:12.417 Total : 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:13:12.417 00:13:13.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.354 Nvme0n1 : 4.00 23055.00 90.06 0.00 0.00 0.00 0.00 0.00 00:13:13.354 =================================================================================================================== 00:13:13.354 Total : 23055.00 90.06 0.00 0.00 0.00 0.00 0.00 00:13:13.354 00:13:14.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.289 Nvme0n1 : 5.00 23105.20 90.25 0.00 0.00 0.00 0.00 0.00 00:13:14.289 =================================================================================================================== 00:13:14.289 Total : 23105.20 90.25 0.00 0.00 0.00 0.00 0.00 00:13:14.289 00:13:15.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.221 Nvme0n1 : 6.00 23139.17 90.39 0.00 0.00 0.00 0.00 0.00 00:13:15.221 
=================================================================================================================== 00:13:15.221 Total : 23139.17 90.39 0.00 0.00 0.00 0.00 0.00 00:13:15.221 00:13:16.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.157 Nvme0n1 : 7.00 23172.57 90.52 0.00 0.00 0.00 0.00 0.00 00:13:16.157 =================================================================================================================== 00:13:16.157 Total : 23172.57 90.52 0.00 0.00 0.00 0.00 0.00 00:13:16.157 00:13:17.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.534 Nvme0n1 : 8.00 23195.12 90.61 0.00 0.00 0.00 0.00 0.00 00:13:17.534 =================================================================================================================== 00:13:17.534 Total : 23195.12 90.61 0.00 0.00 0.00 0.00 0.00 00:13:17.534 00:13:18.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.472 Nvme0n1 : 9.00 23228.78 90.74 0.00 0.00 0.00 0.00 0.00 00:13:18.472 =================================================================================================================== 00:13:18.472 Total : 23228.78 90.74 0.00 0.00 0.00 0.00 0.00 00:13:18.472 00:13:19.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.409 Nvme0n1 : 10.00 23225.00 90.72 0.00 0.00 0.00 0.00 0.00 00:13:19.409 =================================================================================================================== 00:13:19.409 Total : 23225.00 90.72 0.00 0.00 0.00 0.00 0.00 00:13:19.409 00:13:19.409 00:13:19.409 Latency(us) 00:13:19.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.409 Nvme0n1 : 10.00 23230.41 90.74 0.00 0.00 5506.75 1652.65 11169.61 00:13:19.409 =================================================================================================================== 00:13:19.409 Total : 23230.41 90.74 0.00 0.00 5506.75 1652.65 11169.61 00:13:19.409 0 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 992318 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 992318 ']' 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 992318 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 992318 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 992318' 00:13:19.409 killing process with pid 992318 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 992318 00:13:19.409 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.409 00:13:19.409 Latency(us) 00:13:19.409 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:13:19.409 =================================================================================================================== 00:13:19.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 992318 00:13:19.409 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:19.667 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:19.926 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:19.926 03:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.926 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.926 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:19.926 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 989222 00:13:19.926 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 989222 00:13:20.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 989222 Killed "${NVMF_APP[@]}" "$@" 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=994377 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 994377 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 994377 ']' 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
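This is the step that makes the test "dirty": the original nvmf_tgt (pid 989222) is taken down with kill -9 instead of a clean shutdown, so the blobstore backing the lvstore is never marked cleanly closed. When the replacement target re-opens the same AIO file just below, blobstore detects the unclean state and replays its metadata, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices confirm. Condensed, with the pid and paths as stand-ins:

  kill -9 "$OLD_NVMF_PID"        # hard kill: blobstore is left dirty
  build/bin/nvmf_tgt -m 0x1 &    # fresh target process
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # reload triggers recovery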
00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.186 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:20.186 [2024-05-15 03:07:51.181905] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:13:20.186 [2024-05-15 03:07:51.181953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.186 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.186 [2024-05-15 03:07:51.242671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.186 [2024-05-15 03:07:51.321058] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.186 [2024-05-15 03:07:51.321092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.186 [2024-05-15 03:07:51.321099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.186 [2024-05-15 03:07:51.321105] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.186 [2024-05-15 03:07:51.321110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.186 [2024-05-15 03:07:51.321127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.123 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:21.123 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:21.123 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.123 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.123 03:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:21.123 [2024-05-15 03:07:52.182865] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:21.123 [2024-05-15 03:07:52.182959] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:21.123 [2024-05-15 03:07:52.182984] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:21.123 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:21.382 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a769fc7-4d16-4417-8edd-19eabd35d943 -t 2000 00:13:21.382 [ 00:13:21.382 { 00:13:21.382 "name": "6a769fc7-4d16-4417-8edd-19eabd35d943", 00:13:21.382 "aliases": [ 00:13:21.382 "lvs/lvol" 00:13:21.382 ], 00:13:21.382 "product_name": "Logical Volume", 00:13:21.382 "block_size": 4096, 00:13:21.382 "num_blocks": 38912, 00:13:21.383 "uuid": "6a769fc7-4d16-4417-8edd-19eabd35d943", 00:13:21.383 "assigned_rate_limits": { 00:13:21.383 "rw_ios_per_sec": 0, 00:13:21.383 "rw_mbytes_per_sec": 0, 00:13:21.383 "r_mbytes_per_sec": 0, 00:13:21.383 "w_mbytes_per_sec": 0 00:13:21.383 }, 00:13:21.383 "claimed": false, 00:13:21.383 "zoned": false, 00:13:21.383 "supported_io_types": { 00:13:21.383 "read": true, 00:13:21.383 "write": true, 00:13:21.383 "unmap": true, 00:13:21.383 "write_zeroes": true, 00:13:21.383 "flush": false, 00:13:21.383 "reset": true, 00:13:21.383 "compare": false, 00:13:21.383 "compare_and_write": false, 00:13:21.383 "abort": false, 00:13:21.383 "nvme_admin": false, 00:13:21.383 "nvme_io": false 00:13:21.383 }, 00:13:21.383 "driver_specific": { 00:13:21.383 "lvol": { 00:13:21.383 "lvol_store_uuid": "65f10805-7c04-417c-89a3-693f889038d5", 00:13:21.383 "base_bdev": "aio_bdev", 00:13:21.383 "thin_provision": false, 00:13:21.383 "num_allocated_clusters": 38, 00:13:21.383 "snapshot": false, 00:13:21.383 "clone": false, 00:13:21.383 "esnap_clone": false 00:13:21.383 } 00:13:21.383 } 00:13:21.383 } 00:13:21.383 ] 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:21.643 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:21.903 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:21.903 03:07:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:21.903 [2024-05-15 03:07:53.035405] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
65f10805-7c04-417c-89a3-693f889038d5 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:22.162 request: 00:13:22.162 { 00:13:22.162 "uuid": "65f10805-7c04-417c-89a3-693f889038d5", 00:13:22.162 "method": "bdev_lvol_get_lvstores", 00:13:22.162 "req_id": 1 00:13:22.162 } 00:13:22.162 Got JSON-RPC error response 00:13:22.162 response: 00:13:22.162 { 00:13:22.162 "code": -19, 00:13:22.162 "message": "No such device" 00:13:22.162 } 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.162 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:22.421 aio_bdev 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
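The NOT wrapper above is the harness's inverted assertion: with the base AIO bdev hot-removed, bdev_lvol_get_lvstores has to fail, and the captured JSON-RPC error (code -19, "No such device") shows that it does. Re-creating the AIO bdev then lets the examine path re-register the lvstore, and waitforbdev polls until the lvol is visible again. A plain-shell equivalent of the assertion, with $LVS_UUID as a stand-in:

  scripts/rpc.py bdev_aio_delete aio_bdev         # hot-remove closes the lvstore with it
  if scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID"; then
      echo "lvstore should be gone" >&2; exit 1   # expected failure: -19, No such device
  fi
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # examine re-registers lvs/lvol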
00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:22.421 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:22.680 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a769fc7-4d16-4417-8edd-19eabd35d943 -t 2000 00:13:22.680 [ 00:13:22.680 { 00:13:22.680 "name": "6a769fc7-4d16-4417-8edd-19eabd35d943", 00:13:22.680 "aliases": [ 00:13:22.680 "lvs/lvol" 00:13:22.680 ], 00:13:22.680 "product_name": "Logical Volume", 00:13:22.680 "block_size": 4096, 00:13:22.680 "num_blocks": 38912, 00:13:22.680 "uuid": "6a769fc7-4d16-4417-8edd-19eabd35d943", 00:13:22.680 "assigned_rate_limits": { 00:13:22.680 "rw_ios_per_sec": 0, 00:13:22.680 "rw_mbytes_per_sec": 0, 00:13:22.680 "r_mbytes_per_sec": 0, 00:13:22.680 "w_mbytes_per_sec": 0 00:13:22.680 }, 00:13:22.680 "claimed": false, 00:13:22.680 "zoned": false, 00:13:22.680 "supported_io_types": { 00:13:22.680 "read": true, 00:13:22.680 "write": true, 00:13:22.680 "unmap": true, 00:13:22.680 "write_zeroes": true, 00:13:22.680 "flush": false, 00:13:22.680 "reset": true, 00:13:22.680 "compare": false, 00:13:22.680 "compare_and_write": false, 00:13:22.680 "abort": false, 00:13:22.680 "nvme_admin": false, 00:13:22.680 "nvme_io": false 00:13:22.680 }, 00:13:22.680 "driver_specific": { 00:13:22.680 "lvol": { 00:13:22.680 "lvol_store_uuid": "65f10805-7c04-417c-89a3-693f889038d5", 00:13:22.680 "base_bdev": "aio_bdev", 00:13:22.680 "thin_provision": false, 00:13:22.680 "num_allocated_clusters": 38, 00:13:22.680 "snapshot": false, 00:13:22.680 "clone": false, 00:13:22.680 "esnap_clone": false 00:13:22.680 } 00:13:22.680 } 00:13:22.680 } 00:13:22.680 ] 00:13:22.680 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:22.680 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:22.680 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:22.940 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:22.940 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:22.940 03:07:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:22.940 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:22.940 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6a769fc7-4d16-4417-8edd-19eabd35d943 00:13:23.199 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65f10805-7c04-417c-89a3-693f889038d5 00:13:23.458 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:23.458 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:23.717 00:13:23.717 real 0m17.654s 00:13:23.717 user 0m44.402s 00:13:23.717 sys 0m3.899s 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:23.717 ************************************ 00:13:23.717 END TEST lvs_grow_dirty 00:13:23.717 ************************************ 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:23.717 nvmf_trace.0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.717 rmmod nvme_tcp 00:13:23.717 rmmod nvme_fabrics 00:13:23.717 rmmod nvme_keyring 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 994377 ']' 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 994377 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 994377 ']' 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 994377 00:13:23.717 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 994377 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 994377' 00:13:23.718 killing process with pid 994377 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 994377 00:13:23.718 03:07:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 994377 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.976 03:07:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.513 03:07:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.513 00:13:26.513 real 0m41.519s 00:13:26.513 user 1m4.699s 00:13:26.513 sys 0m9.529s 00:13:26.513 03:07:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:26.513 03:07:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:26.513 ************************************ 00:13:26.513 END TEST nvmf_lvs_grow 00:13:26.513 ************************************ 00:13:26.513 03:07:57 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.513 03:07:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:26.513 03:07:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.513 03:07:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.513 ************************************ 00:13:26.513 START TEST nvmf_bdev_io_wait 00:13:26.513 ************************************ 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.513 * Looking for test storage... 
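Between the two tests, nvmftestfini tears the fixture down: the trace above shows the kernel initiator modules being unloaded (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), the target process killed and waited on, the namespace removed, and the leftover address flushed from cvl_0_1. A rough equivalent, assuming the same names as above:

  modprobe -r nvme-tcp nvme-fabrics     # matches the rmmod lines above
  kill "$NVMF_PID" && wait "$NVMF_PID"  # stop the nvmf_tgt started for the test
  ip netns delete cvl_0_0_ns_spdk       # _remove_spdk_ns (its output is suppressed above)
  ip -4 addr flush cvl_0_1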
00:13:26.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.513 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.514 03:07:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.711 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.711 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.711 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.970 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.970 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.970 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.971 03:08:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:30.971 00:13:30.971 --- 10.0.0.2 ping statistics --- 00:13:30.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.971 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:13:30.971 00:13:30.971 --- 10.0.0.1 ping statistics --- 00:13:30.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.971 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=998346 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 998346 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 998346 ']' 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:30.971 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:30.971 [2024-05-15 03:08:02.129561] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
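For nvmf_bdev_io_wait, nvmftestinit wires the two E810 ports back to back through a network namespace: the target side (cvl_0_0, 10.0.0.2) lives inside cvl_0_0_ns_spdk while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, and the pings above verify reachability in both directions. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target is then started with --wait-for-rpc so the test can shrink the bdev_io pool (bdev_set_options -p 5 -c 1, visible below) before framework_start_init; the deliberately tiny pool is what pushes bdevperf's write/read/flush/unmap jobs onto the bdev layer's IO-wait path that this test exercises.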
00:13:30.971 [2024-05-15 03:08:02.129605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.230 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.230 [2024-05-15 03:08:02.185854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.230 [2024-05-15 03:08:02.262406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.230 [2024-05-15 03:08:02.262446] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.230 [2024-05-15 03:08:02.262452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.230 [2024-05-15 03:08:02.262460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.230 [2024-05-15 03:08:02.262468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.230 [2024-05-15 03:08:02.262568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.230 [2024-05-15 03:08:02.262665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.230 [2024-05-15 03:08:02.262752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.230 [2024-05-15 03:08:02.262753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.797 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.797 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:13:31.797 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.797 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.797 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 [2024-05-15 03:08:03.057458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 Malloc0 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.057 [2024-05-15 03:08:03.120178] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:32.057 [2024-05-15 03:08:03.120413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=998472 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=998474 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.057 { 00:13:32.057 "params": { 00:13:32.057 "name": "Nvme$subsystem", 00:13:32.057 "trtype": "$TEST_TRANSPORT", 00:13:32.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.057 "adrfam": "ipv4", 00:13:32.057 "trsvcid": "$NVMF_PORT", 00:13:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.057 "hdgst": ${hdgst:-false}, 00:13:32.057 "ddgst": ${ddgst:-false} 00:13:32.057 }, 00:13:32.057 "method": 
"bdev_nvme_attach_controller" 00:13:32.057 } 00:13:32.057 EOF 00:13:32.057 )") 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=998476 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.057 { 00:13:32.057 "params": { 00:13:32.057 "name": "Nvme$subsystem", 00:13:32.057 "trtype": "$TEST_TRANSPORT", 00:13:32.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.057 "adrfam": "ipv4", 00:13:32.057 "trsvcid": "$NVMF_PORT", 00:13:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.057 "hdgst": ${hdgst:-false}, 00:13:32.057 "ddgst": ${ddgst:-false} 00:13:32.057 }, 00:13:32.057 "method": "bdev_nvme_attach_controller" 00:13:32.057 } 00:13:32.057 EOF 00:13:32.057 )") 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=998479 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.057 { 00:13:32.057 "params": { 00:13:32.057 "name": "Nvme$subsystem", 00:13:32.057 "trtype": "$TEST_TRANSPORT", 00:13:32.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.057 "adrfam": "ipv4", 00:13:32.057 "trsvcid": "$NVMF_PORT", 00:13:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.057 "hdgst": ${hdgst:-false}, 00:13:32.057 "ddgst": ${ddgst:-false} 00:13:32.057 }, 00:13:32.057 "method": "bdev_nvme_attach_controller" 00:13:32.057 } 00:13:32.057 EOF 00:13:32.057 )") 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.057 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.057 { 00:13:32.057 "params": { 00:13:32.057 "name": "Nvme$subsystem", 00:13:32.057 "trtype": "$TEST_TRANSPORT", 00:13:32.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.057 "adrfam": "ipv4", 00:13:32.057 "trsvcid": "$NVMF_PORT", 00:13:32.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.058 "hdgst": ${hdgst:-false}, 00:13:32.058 "ddgst": ${ddgst:-false} 00:13:32.058 }, 00:13:32.058 "method": "bdev_nvme_attach_controller" 00:13:32.058 } 00:13:32.058 EOF 00:13:32.058 )") 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 998472 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.058 "params": { 00:13:32.058 "name": "Nvme1", 00:13:32.058 "trtype": "tcp", 00:13:32.058 "traddr": "10.0.0.2", 00:13:32.058 "adrfam": "ipv4", 00:13:32.058 "trsvcid": "4420", 00:13:32.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.058 "hdgst": false, 00:13:32.058 "ddgst": false 00:13:32.058 }, 00:13:32.058 "method": "bdev_nvme_attach_controller" 00:13:32.058 }' 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
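The fragments rendered here by gen_nvmf_target_json are what each bdevperf instance reads over --json /dev/fd/63: process substitution hands the heredoc-built config to the child as that file descriptor. A minimal standalone sketch of the same mechanism, assuming a target is already listening on 10.0.0.2:4420 and assuming the standard SPDK "subsystems" envelope around the method/params object printed above (paths relative to an SPDK checkout):

# Sketch only: inline bdev config for bdevperf; <(...) surfaces as /dev/fd/63,
# matching the --json /dev/fd/63 seen in the trace.
./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)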
00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.058 "params": { 00:13:32.058 "name": "Nvme1", 00:13:32.058 "trtype": "tcp", 00:13:32.058 "traddr": "10.0.0.2", 00:13:32.058 "adrfam": "ipv4", 00:13:32.058 "trsvcid": "4420", 00:13:32.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.058 "hdgst": false, 00:13:32.058 "ddgst": false 00:13:32.058 }, 00:13:32.058 "method": "bdev_nvme_attach_controller" 00:13:32.058 }' 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.058 "params": { 00:13:32.058 "name": "Nvme1", 00:13:32.058 "trtype": "tcp", 00:13:32.058 "traddr": "10.0.0.2", 00:13:32.058 "adrfam": "ipv4", 00:13:32.058 "trsvcid": "4420", 00:13:32.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.058 "hdgst": false, 00:13:32.058 "ddgst": false 00:13:32.058 }, 00:13:32.058 "method": "bdev_nvme_attach_controller" 00:13:32.058 }' 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.058 03:08:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.058 "params": { 00:13:32.058 "name": "Nvme1", 00:13:32.058 "trtype": "tcp", 00:13:32.058 "traddr": "10.0.0.2", 00:13:32.058 "adrfam": "ipv4", 00:13:32.058 "trsvcid": "4420", 00:13:32.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.058 "hdgst": false, 00:13:32.058 "ddgst": false 00:13:32.058 }, 00:13:32.058 "method": "bdev_nvme_attach_controller" 00:13:32.058 }' [2024-05-15 03:08:03.168011] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... [2024-05-15 03:08:03.168011] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... [2024-05-15 03:08:03.168062] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:32.058 [2024-05-15 03:08:03.168062] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:32.058 [2024-05-15 03:08:03.170801] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:13:32.058 [2024-05-15 03:08:03.170853] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:32.058 [2024-05-15 03:08:03.174476] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:13:32.058 [2024-05-15 03:08:03.174516] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:32.316 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.316 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.316 [2024-05-15 03:08:03.352290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.316 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.316 [2024-05-15 03:08:03.429963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:32.316 [2024-05-15 03:08:03.443775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.574 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.574 [2024-05-15 03:08:03.518721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:32.574 [2024-05-15 03:08:03.540543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.574 [2024-05-15 03:08:03.582741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.574 [2024-05-15 03:08:03.624901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:32.574 [2024-05-15 03:08:03.661156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:32.832 Running I/O for 1 seconds... 00:13:32.832 Running I/O for 1 seconds... 00:13:32.832 Running I/O for 1 seconds... 00:13:32.832 Running I/O for 1 seconds... 00:13:33.792 00:13:33.792 Latency(us) 00:13:33.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.792 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:33.792 Nvme1n1 : 1.01 7877.89 30.77 0.00 0.00 16184.04 6582.09 24390.79 00:13:33.792 =================================================================================================================== 00:13:33.792 Total : 7877.89 30.77 0.00 0.00 16184.04 6582.09 24390.79 00:13:33.792 00:13:33.792 Latency(us) 00:13:33.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.792 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:33.792 Nvme1n1 : 1.00 7543.88 29.47 0.00 0.00 16923.33 4644.51 39891.48 00:13:33.792 =================================================================================================================== 00:13:33.792 Total : 7543.88 29.47 0.00 0.00 16923.33 4644.51 39891.48 00:13:33.792 00:13:33.792 Latency(us) 00:13:33.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.792 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:33.792 Nvme1n1 : 1.01 11119.00 43.43 0.00 0.00 11471.96 6895.53 23478.98 00:13:33.792 =================================================================================================================== 00:13:33.792 Total : 11119.00 43.43 0.00 0.00 11471.96 6895.53 23478.98 00:13:34.050 03:08:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 998474 00:13:34.050 00:13:34.050 Latency(us) 00:13:34.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.050 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:34.050 Nvme1n1 : 1.00 246087.28 961.28 0.00 0.00 517.67 212.81 694.54 00:13:34.050 =================================================================================================================== 00:13:34.050 Total : 246087.28 961.28 0.00 0.00 517.67 
212.81 694.54 00:13:34.050 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 998476 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 998479 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.309 rmmod nvme_tcp 00:13:34.309 rmmod nvme_fabrics 00:13:34.309 rmmod nvme_keyring 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 998346 ']' 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 998346 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 998346 ']' 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 998346 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 998346 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:34.309 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:34.310 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 998346' 00:13:34.310 killing process with pid 998346 00:13:34.310 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 998346 00:13:34.310 [2024-05-15 03:08:05.318625] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:34.310 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 998346 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.568 03:08:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.472 03:08:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.472 00:13:36.472 real 0m10.401s 00:13:36.472 user 0m19.906s 00:13:36.472 sys 0m5.162s 00:13:36.472 03:08:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.472 03:08:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:36.472 ************************************ 00:13:36.472 END TEST nvmf_bdev_io_wait 00:13:36.472 ************************************ 00:13:36.472 03:08:07 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.472 03:08:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:36.472 03:08:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.472 03:08:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 ************************************ 00:13:36.732 START TEST nvmf_queue_depth 00:13:36.732 ************************************ 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.732 * Looking for test storage... 
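Before the trace continues: condensed from the rpc_cmd and bdevperf calls that follow, the core of queue_depth.sh is roughly the sequence below (rpc_cmd resolves to scripts/rpc.py here; in this run nvmf_tgt additionally executes inside the cvl_0_0_ns_spdk network namespace). A sketch, not the verbatim script:

# Target: NVMe/TCP listener backed by a 64 MiB, 512 B-block malloc bdev.
./build/bin/nvmf_tgt -m 0x2 &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator: bdevperf waits (-z) on its own RPC socket, attaches the remote
# controller, then runs 10 s of 4 KiB verify I/O at queue depth 1024.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests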
00:13:36.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.732 03:08:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.733 03:08:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.337 
03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.337 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.338 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.338 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.338 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:42.338 00:13:42.338 --- 10.0.0.2 ping statistics --- 00:13:42.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.338 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:42.338 03:08:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:13:42.338 00:13:42.338 --- 10.0.0.1 ping statistics --- 00:13:42.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.338 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1002302 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1002302 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1002302 ']' 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.338 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.338 [2024-05-15 03:08:13.089908] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:13:42.338 [2024-05-15 03:08:13.089950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.338 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.338 [2024-05-15 03:08:13.146917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.338 [2024-05-15 03:08:13.228803] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.338 [2024-05-15 03:08:13.228838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.338 [2024-05-15 03:08:13.228845] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.338 [2024-05-15 03:08:13.228852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.338 [2024-05-15 03:08:13.228857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.338 [2024-05-15 03:08:13.228880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.907 [2024-05-15 03:08:13.928230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.907 Malloc0 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.907 03:08:13 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.907 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.908 [2024-05-15 03:08:13.988457] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:42.908 [2024-05-15 03:08:13.988696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1002502 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1002502 /var/tmp/bdevperf.sock 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1002502 ']' 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.908 03:08:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.908 [2024-05-15 03:08:14.036596] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:13:42.908 [2024-05-15 03:08:14.036636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002502 ] 00:13:42.908 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.167 [2024-05-15 03:08:14.089962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.167 [2024-05-15 03:08:14.162414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.736 03:08:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:43.736 03:08:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:43.736 03:08:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:43.736 03:08:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.736 03:08:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.995 NVMe0n1 00:13:43.995 03:08:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.995 03:08:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:43.995 Running I/O for 10 seconds... 00:13:56.203 00:13:56.203 Latency(us) 00:13:56.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.203 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:56.203 Verification LBA range: start 0x0 length 0x4000 00:13:56.203 NVMe0n1 : 10.05 12330.62 48.17 0.00 0.00 82751.15 10257.81 56303.97 00:13:56.203 =================================================================================================================== 00:13:56.203 Total : 12330.62 48.17 0.00 0.00 82751.15 10257.81 56303.97 00:13:56.203 0 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1002502 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1002502 ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1002502 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1002502 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1002502' 00:13:56.203 killing process with pid 1002502 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1002502 00:13:56.203 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.203 00:13:56.203 Latency(us) 00:13:56.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.203 =================================================================================================================== 00:13:56.203 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1002502 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.203 rmmod nvme_tcp 00:13:56.203 rmmod nvme_fabrics 00:13:56.203 rmmod nvme_keyring 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1002302 ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1002302 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1002302 ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1002302 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1002302 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1002302' 00:13:56.203 killing process with pid 1002302 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1002302 00:13:56.203 [2024-05-15 03:08:25.571332] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1002302 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.203 03:08:25 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.772 03:08:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.772 00:13:56.772 real 0m20.209s 00:13:56.772 user 0m24.995s 00:13:56.772 sys 0m5.436s 00:13:56.772 03:08:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:56.772 03:08:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:56.772 ************************************ 00:13:56.772 END TEST nvmf_queue_depth 00:13:56.772 ************************************ 00:13:56.772 03:08:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.772 ************************************ 00:13:56.772 START TEST nvmf_target_multipath 00:13:56.772 ************************************ 00:13:56.772 03:08:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:57.032 * Looking for test storage... 00:13:57.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath 
00:13:56.772 03:08:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:56.772 03:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:56.772 ************************************
00:13:56.772 START TEST nvmf_target_multipath
00:13:56.772 ************************************
00:13:56.772 03:08:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:13:57.032 * Looking for test storage...
00:13:57.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:57.032 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable
00:13:57.033 03:08:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=()
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:14:02.306 Found 0000:86:00.0 (0x8086 - 0x159b)
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:14:02.306 Found 0000:86:00.1 (0x8086 - 0x159b)
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:14:02.306 Found net devices under 0000:86:00.0: cvl_0_0
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:02.306 Found net devices under 0000:86:00.1: cvl_0_1
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:02.306 03:08:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:02.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:02.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:14:02.306
00:14:02.306 --- 10.0.0.2 ping statistics ---
00:14:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.306 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:02.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:02.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:14:02.306
00:14:02.306 --- 10.0.0.1 ping statistics ---
00:14:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.306 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
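nvmftestinit is complete at this point. The ip/iptables commands traced above (nvmf/common.sh@244-268) build the standard single-host NVMe/TCP topology for these phy runs: one port of the E810 pair stays in the root namespace as the initiator, the other is moved into a private network namespace as the target, so traffic still crosses the physical link. Condensed into plain shell, with the interface names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The two one-packet pings above are the health check for this topology; both succeeded, so nvmftestinit returned 0 and loaded nvme-tcp on the initiator side.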
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:14:02.306 only one NIC for nvmf test
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:02.306 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:02.307 rmmod nvme_tcp
00:14:02.307 rmmod nvme_fabrics
00:14:02.307 rmmod nvme_keyring
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:02.307 03:08:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:04.211 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:04.212
00:14:04.212 real 0m7.305s
00:14:04.212 user 0m1.369s
00:14:04.212 sys 0m3.819s
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:04.212 03:08:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:14:04.212 ************************************
00:14:04.212 END TEST nvmf_target_multipath
00:14:04.212 ************************************
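The multipath test therefore did no multipath work at all: it needs a second target address (two usable paths) to flip I/O between, and nvmf_tcp_init left that empty above (nvmf/common.sh@240). The guard, as traced at target/multipath.sh@45-48; the variable name follows @240, since the xtrace only shows the already-expanded empty test. The exit 0 makes this an environmental skip rather than a failure, and the EXIT trap is what re-ran nvmftestfini a second time:

if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo 'only one NIC for nvmf test'
    nvmftestfini
    exit 0
fi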
00:14:04.212 03:08:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:14:04.212 03:08:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:04.212 03:08:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:04.212 03:08:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:04.212 ************************************
00:14:04.212 START TEST nvmf_zcopy
00:14:04.212 ************************************
00:14:04.212 03:08:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:14:04.212 * Looking for test storage...
00:14:04.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable
00:14:04.472 03:08:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=()
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:14:09.753 Found 0000:86:00.0 (0x8086 - 0x159b)
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:14:09.753 Found 0000:86:00.1 (0x8086 - 0x159b)
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:14:09.753 Found net devices under 0000:86:00.0: cvl_0_0
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:14:09.753 Found net devices under 0000:86:00.1: cvl_0_1
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:09.753 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:09.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:09.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms
00:14:09.754
00:14:09.754 --- 10.0.0.2 ping statistics ---
00:14:09.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:09.754 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:09.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:09.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms
00:14:09.754
00:14:09.754 --- 10.0.0.1 ping statistics ---
00:14:09.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:09.754 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1011134
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1011134
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1011134 ']'
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:09.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:09.754 03:08:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:09.754 [2024-05-15 03:08:40.667865] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:14:09.754 [2024-05-15 03:08:40.667907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:09.754 EAL: No free 2048 kB hugepages reported on node 1
00:14:09.754 [2024-05-15 03:08:40.727023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:09.754 [2024-05-15 03:08:40.799785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:09.754 [2024-05-15 03:08:40.799823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:09.754 [2024-05-15 03:08:40.799830] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:09.754 [2024-05-15 03:08:40.799836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:09.754 [2024-05-15 03:08:40.799841] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:09.754 [2024-05-15 03:08:40.799860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:10.322 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:14:10.322 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0
00:14:10.322 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:10.322 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:10.322 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 [2024-05-15 03:08:41.502817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 [2024-05-15 03:08:41.526827] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:14:10.581 [2024-05-15 03:08:41.527011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 malloc0
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:10.581 {
00:14:10.581 "params": {
00:14:10.581 "name": "Nvme$subsystem",
00:14:10.581 "trtype": "$TEST_TRANSPORT",
00:14:10.581 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:10.581 "adrfam": "ipv4",
00:14:10.581 "trsvcid": "$NVMF_PORT",
00:14:10.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:10.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:10.581 "hdgst": ${hdgst:-false},
00:14:10.581 "ddgst": ${ddgst:-false}
00:14:10.581 },
00:14:10.581 "method": "bdev_nvme_attach_controller"
00:14:10.581 }
00:14:10.581 EOF
00:14:10.581 )")
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:10.581 03:08:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:10.581 "params": {
00:14:10.581 "name": "Nvme1",
00:14:10.581 "trtype": "tcp",
00:14:10.581 "traddr": "10.0.0.2",
00:14:10.581 "adrfam": "ipv4",
00:14:10.581 "trsvcid": "4420",
00:14:10.581 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:10.581 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:10.581 "hdgst": false,
00:14:10.581 "ddgst": false
00:14:10.581 },
00:14:10.581 "method": "bdev_nvme_attach_controller"
00:14:10.581 }'
00:14:10.581 [2024-05-15 03:08:41.610304] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:14:10.581 [2024-05-15 03:08:41.610346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011377 ]
00:14:10.581 EAL: No free 2048 kB hugepages reported on node 1
00:14:10.581 [2024-05-15 03:08:41.663669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.581 [2024-05-15 03:08:41.738129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:11.150 Running I/O for 10 seconds...
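While that 10-second verify workload runs, it is worth collecting what the trace above actually set up. The target bring-up at target/zcopy.sh@22-30 reduces to five RPCs; the flags are exactly as logged, only the rpc_cmd wrapper and the full rpc.py path are elided here:

rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled (-o and -c 0 as logged)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0              # 32 MiB RAM-backed bdev, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The initiator side is bdevperf consuming the JSON printed just above through --json /dev/fd/62, which is consistent with a bash process substitution feeding it gen_nvmf_target_json's output.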
00:14:21.188
00:14:21.188 Latency(us)
00:14:21.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:21.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:21.188 Verification LBA range: start 0x0 length 0x1000
00:14:21.189 Nvme1n1 : 10.01 8700.17 67.97 0.00 0.00 14669.41 1951.83 24846.69
00:14:21.189 ===================================================================================================================
00:14:21.189 Total : 8700.17 67.97 0.00 0.00 14669.41 1951.83 24846.69
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1013155
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:21.189 {
00:14:21.189 "params": {
00:14:21.189 "name": "Nvme$subsystem",
00:14:21.189 "trtype": "$TEST_TRANSPORT",
00:14:21.189 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:21.189 "adrfam": "ipv4",
00:14:21.189 "trsvcid": "$NVMF_PORT",
00:14:21.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:21.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:21.189 "hdgst": ${hdgst:-false},
00:14:21.189 "ddgst": ${ddgst:-false}
00:14:21.189 },
00:14:21.189 "method": "bdev_nvme_attach_controller"
00:14:21.189 }
00:14:21.189 EOF
00:14:21.189 )")
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:21.189 [2024-05-15 03:08:52.303797] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.189 [2024-05-15 03:08:52.303834] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:21.189 03:08:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:21.189 "params": {
00:14:21.189 "name": "Nvme1",
00:14:21.189 "trtype": "tcp",
00:14:21.189 "traddr": "10.0.0.2",
00:14:21.189 "adrfam": "ipv4",
00:14:21.189 "trsvcid": "4420",
00:14:21.189 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:21.189 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:21.189 "hdgst": false,
00:14:21.189 "ddgst": false
00:14:21.189 },
00:14:21.189 "method": "bdev_nvme_attach_controller"
00:14:21.189 }'
00:14:21.189 [2024-05-15 03:08:52.315794] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.189 [2024-05-15 03:08:52.315806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.189 [2024-05-15 03:08:52.323809] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.189 [2024-05-15 03:08:52.323819] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.189 [2024-05-15 03:08:52.335841] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.189 [2024-05-15 03:08:52.335855] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.189 [2024-05-15 03:08:52.340388] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:14:21.189 [2024-05-15 03:08:52.340431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013155 ]
00:14:21.189 [2024-05-15 03:08:52.347875] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.189 [2024-05-15 03:08:52.347886] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 [2024-05-15 03:08:52.359906] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.359915] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 EAL: No free 2048 kB hugepages reported on node 1
00:14:21.447 [2024-05-15 03:08:52.371939] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.371949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 [2024-05-15 03:08:52.383972] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.383983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 [2024-05-15 03:08:52.393176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:21.447 [2024-05-15 03:08:52.396000] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.396010] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 [2024-05-15 03:08:52.408034] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.408047] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.447 [2024-05-15 03:08:52.420066] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.447 [2024-05-15 03:08:52.420075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.432103] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.432120] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.444128] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.444143] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.456159] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.456168] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.468193] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.468203] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.469735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:21.448 [2024-05-15 03:08:52.480229] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.480244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.492260] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.492276] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.504291] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.504304] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.516317] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.516328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.528353] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.528370] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.540386] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.540397] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.552412] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.552422] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.564463] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.564486] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.576487] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.576500] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.448 [2024-05-15 03:08:52.588524] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.448 [2024-05-15 03:08:52.588538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
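The repeating *ERROR* pairs here are expected noise rather than a failure: malloc0 is already attached to the subsystem as NSID 1 (target/zcopy.sh@30 above), and the test side keeps re-issuing the add-namespace RPC while the second bdevperf run starts, so the target rejects every attempt after pausing and resuming the subsystem (hence nvmf_rpc_ns_paused in each pair). A hypothetical loop that would reproduce this pattern; the actual driver of these calls lives inside the test scripts and is not visible in this log:

# each iteration yields one "Requested NSID 1 already in use" /
# "Unable to add namespace" pair like those above and below
for _ in {1..40}; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done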
[2024-05-15 03:08:52.600549] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.448 [2024-05-15 03:08:52.600558] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.612582] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.612592] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.624615] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.624625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.636651] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.636666] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.648680] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.648690] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.660711] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.660721] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.672746] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.672755] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.684784] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.684798] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.696813] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.696822] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.708850] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.708863] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.720881] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.720892] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.732916] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.732928] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.744945] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.744955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.756977] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.756987] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.769011] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 
03:08:52.769022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.781053] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.781070] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 Running I/O for 5 seconds... 00:14:21.707 [2024-05-15 03:08:52.793079] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.793090] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.804950] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.804969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.818969] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.818988] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.827805] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.827824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.837131] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.837150] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.845817] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.845835] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.707 [2024-05-15 03:08:52.855503] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.707 [2024-05-15 03:08:52.855521] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.870109] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.870127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.883995] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.884013] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.892854] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.892872] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.901593] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.901611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.910226] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.910244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 03:08:52.924884] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:21.966 [2024-05-15 03:08:52.924903] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:21.966 [2024-05-15 
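The pair repeated above is what SPDK emits when an nvmf_subsystem_add_ns RPC requests an NSID that is still attached: spdk_nvmf_subsystem_add_ns_ext() in subsystem.c rejects the collision, and the RPC handler in nvmf_rpc.c reports the failed add. A minimal bash sketch of the kind of retry loop that produces this cadence against a running nvmf target; the rpc.py path, subsystem NQN, and bdev name below are illustrative assumptions, not values taken from this log:

rpc_py=./scripts/rpc.py                 # assumed path to SPDK's rpc.py
nqn=nqn.2016-06.io.spdk:cnode1          # assumed subsystem NQN
bdev=Malloc0                            # assumed bdev backing the namespace

for _ in $(seq 1 50); do
    # While NSID 1 is still attached, each call fails and SPDK emits the
    # two *ERROR* lines seen throughout this log.
    "$rpc_py" nvmf_subsystem_add_ns -n 1 "$nqn" "$bdev" || true
    sleep 0.01
done

In a hotplug-style stress test this noise is expected: the add keeps being retried while I/O runs and only succeeds in the windows right after the namespace has been detached (e.g., by nvmf_subsystem_remove_ns).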
[... the same *ERROR* pair from subsystem.c:1997 ("Requested NSID 1 already in use") and nvmf_rpc.c:1531 ("Unable to add namespace") repeats every ~10-15 ms from 03:08:52.444 onward while the I/O runs ...]
00:14:24.823 [2024-05-15 03:08:55.799434] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:24.823 [2024-05-15 03:08:55.799451]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.812604] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.812623] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.821620] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.821642] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.835945] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.835963] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.849588] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.849605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.858381] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.858398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.872593] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.872611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.886115] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.886133] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.900026] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.900043] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.913535] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.913553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.927197] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.927214] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.941566] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.941584] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.953000] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.953017] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.967067] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.967085] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.823 [2024-05-15 03:08:55.980758] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.823 [2024-05-15 03:08:55.980776] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:55.994857] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:55.994875] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.003697] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.003715] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.013096] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.013114] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.027442] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.027459] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.041005] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.041023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.049949] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.049966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.059078] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.059095] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.073949] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.073967] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.088991] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.089009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.097834] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.097851] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.106427] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.106445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.115020] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.115037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.124370] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.124388] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.138515] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.138533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.152258] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.152276] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.165619] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.165637] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.179267] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.179286] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.192662] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.192681] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.206527] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.206545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.220265] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.220284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.233778] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.233797] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.082 [2024-05-15 03:08:56.242621] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.082 [2024-05-15 03:08:56.242640] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.257359] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.257377] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.268565] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.268584] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.277426] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.277444] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.286088] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.286106] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.300934] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.300952] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.316405] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.316424] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.330655] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.330674] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.344701] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.344720] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.353611] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.353630] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.367972] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.367990] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.381916] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.381934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.395479] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.395497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.409598] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.409616] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.423315] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.423333] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.436949] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.436968] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.450794] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.450812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.464609] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.464628] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.478709] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.478727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.341 [2024-05-15 03:08:56.492512] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.341 [2024-05-15 03:08:56.492530] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.505847] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.505866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.520139] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.520157] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.533931] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.533950] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.542595] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.542613] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.557070] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.557089] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.565964] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.565982] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.580116] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.580134] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.593606] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.593624] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.602462] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.602484] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.611806] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.611824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.620400] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.620418] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.629798] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.629816] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.643731] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.643749] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.652769] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.652787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.666862] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.666880] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.674280] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.674297] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.683341] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.683358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.697273] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.697290] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.706261] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.706279] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.720732] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.720751] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.599 [2024-05-15 03:08:56.734593] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.599 [2024-05-15 03:08:56.734611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.600 [2024-05-15 03:08:56.743518] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.600 [2024-05-15 03:08:56.743535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.600 [2024-05-15 03:08:56.752918] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.600 [2024-05-15 03:08:56.752936] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.767424] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.767442] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.781625] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.781643] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.792462] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.792486] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.806925] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.806944] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.820524] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.820543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.834280] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.834298] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.848229] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.848248] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.855825] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.855842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.864718] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.864735] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.879018] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.879036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.888018] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.888036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.902228] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.902246] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.916067] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.916086] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.925047] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.925064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.939081] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.939098] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.947870] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.947887] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.957124] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.957141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.966216] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.966239] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.974857] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.974875] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.989323] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.989340] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:56.998323] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:56.998340] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.858 [2024-05-15 03:08:57.012726] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.858 [2024-05-15 03:08:57.012743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.026407] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.026425] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.040491] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.040508] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.054123] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.054141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.067836] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.067854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.081673] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.081690] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.095257] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.095276] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.104419] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.104437] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.118688] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.118705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.132506] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.132523] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.146417] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.146434] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.155244] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.155261] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.169837] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.169855] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.180506] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.180524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.189190] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.189207] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.198494] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.198515] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.207100] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.207117] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.221388] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.221406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.234807] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.234825] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.248423] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.248441] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.262388] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.262406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.117 [2024-05-15 03:08:57.271345] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.117 [2024-05-15 03:08:57.271363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.285806] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.285824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.299661] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.299679] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.308627] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.308644] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.322611] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.322629] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.336019] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.336036] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.349812] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.349830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.363587] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.363604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.372450] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.372483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.386559] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.386576] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.395132] 
subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.395148] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.404478] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.404497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.418748] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.418765] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.432301] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.432322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.441367] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.441385] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.450609] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.450626] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.464996] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.465014] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.478933] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.478951] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.492653] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.492671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.506408] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.506426] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.376 [2024-05-15 03:08:57.515247] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.376 [2024-05-15 03:08:57.515264] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.377 [2024-05-15 03:08:57.529524] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.377 [2024-05-15 03:08:57.529542] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.543212] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.543230] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.556852] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.556869] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.570798] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.570815] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.578216] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.578234] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.592149] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.592167] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.606036] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.606055] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.614996] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.615014] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.629434] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.629453] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.638244] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.638263] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.647070] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.647088] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.669587] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.669610] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.683237] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.683256] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.692145] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.692163] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.706914] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.706932] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.722818] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.722837] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.736748] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.736766] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.746030] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.635 [2024-05-15 03:08:57.746049] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.635 [2024-05-15 03:08:57.754815] 
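The two messages above alternate once per iteration of zcopy.sh's pause/add-namespace loop: while I/O runs, the test repeatedly asks the target to attach another bdev under NSID 1, which is already claimed, so subsystem.c rejects the request and nvmf_rpc.c reports the failed RPC; only the timestamps differ between repetitions. A minimal sketch of one such rejected call, assuming a running target that already exposes NSID 1 on the subsystem from this run (the spare bdev name malloc1 is an illustrative assumption):

# Offer a second bdev under an NSID that is already in use; the target is
# expected to answer with the two errors logged above.
scripts/rpc.py bdev_malloc_create -b malloc1 64 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
# -> rejected: NSID 1 is taken, so the RPC fails rather than replacing the namespace.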
00:14:26.894 [2024-05-15 03:08:57.805767] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.894 [2024-05-15 03:08:57.805786] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:26.894
00:14:26.894 Latency(us)
00:14:26.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:26.894 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:26.894 Nvme1n1 : 5.01 16843.04 131.59 0.00 0.00 7591.95 3348.03 17780.20
00:14:26.894 ===================================================================================================================
00:14:26.894 Total : 16843.04 131.59 0.00 0.00 7591.95 3348.03 17780.20
00:14:26.894 [2024-05-15 03:08:57.815640] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.894 [2024-05-15 03:08:57.815658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
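The table above is the bdevperf-style summary of the zcopy I/O phase: one job against Nvme1n1 (50/50 random read/write, queue depth 128, 8 KiB I/Os) sustained 16843 IOPS, i.e. 131.59 MiB/s, over the 5 s runtime, with an average completion latency of 7591.95 us. That latency is what a constant queue depth of 128 at this rate implies (Little's law: 128 / 16843 IOPS is about 7.6 ms). A hypothetical invocation consistent with the job line (the flags and binary path are assumptions, not read from this log; gen_nvmf_target_json is the test helper that emits the bdev config for the attached controller):

# Assumed re-run of the I/O phase: 50/50 random read/write, QD 128, 8 KiB, 5 s.
build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 8192 -w randrw -M 50 -t 5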
00:14:26.894 [2024-05-15 03:08:58.020178] subsystem.c:1997:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.894 [2024-05-15 03:08:58.020188] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1013155) - No such process 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1013155 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.894 delay0 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
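With the I/O phase done, the test frees NSID 1, builds a delay bdev on top of malloc0, and (in the next step of the log) re-exposes it, so the abort run that follows has slow, cancellable commands to work against: delay0 injects a 1000000 us (1 s) average and 99th-percentile latency on both reads and writes. The same sequence as direct rpc.py calls would look roughly like this (rpc.py path assumed; the commands and flags mirror the rpc_cmd lines above):

# -b base bdev, -d new delay bdev name; -r/-w average read/write latency (us),
# -t/-n 99th-percentile read/write latency (us).
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1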
00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.894 03:08:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:27.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.152 [2024-05-15 03:08:58.146756] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:33.719 Initializing NVMe Controllers 00:14:33.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.719 Initialization complete. Launching workers. 00:14:33.719 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:14:33.719 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 40 00:14:33.719 success 217, unsuccess 179, failed 0 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.719 rmmod nvme_tcp 00:14:33.719 rmmod nvme_fabrics 00:14:33.719 rmmod nvme_keyring 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1011134 ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1011134 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1011134 ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1011134 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1011134 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1011134' killing process with pid 1011134
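The abort statistics above parse as: 436 I/Os were issued against the one-second delay0 namespace, of which 320 completed and 116 failed; 396 abort commands were submitted (40 could not be), and 217 of them cancelled their target command while 179 did not, typically because the command had already completed. An annotated form of the invocation above (the flag readings are my interpretation of the example's perf-style options, not taken from this log):

# -c 0x1    : core mask, run on core 0      -t 5 : run for 5 seconds
# -q 64     : queue depth per namespace     -w randrw -M 50 : 50/50 random read/write
# -l warning: SPDK log level                -r '...' : transport ID of the target to attack
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'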
1011134 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1011134 00:14:33.719 [2024-05-15 03:09:04.396177] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1011134 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.719 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.720 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.720 03:09:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.720 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.720 03:09:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.625 03:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.625 00:14:35.625 real 0m31.366s 00:14:35.625 user 0m43.228s 00:14:35.625 sys 0m10.160s 00:14:35.625 03:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:35.625 03:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:35.625 ************************************ 00:14:35.625 END TEST nvmf_zcopy 00:14:35.625 ************************************ 00:14:35.625 03:09:06 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:35.625 03:09:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:35.625 03:09:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:35.625 03:09:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.625 ************************************ 00:14:35.625 START TEST nvmf_nmic 00:14:35.625 ************************************ 00:14:35.625 03:09:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:35.885 * Looking for test storage... 
00:14:35.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.885 03:09:06 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.885 03:09:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.886 03:09:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.160 
03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.160 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.160 03:09:11 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.160 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.160 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.161 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.161 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
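The nvmf_tcp_init sequence traced next takes the two E810 ports found above (0000:86:00.0/.1, exposed as cvl_0_0/cvl_0_1) and builds a point-to-point NVMe/TCP topology: the target port is isolated in its own network namespace at 10.0.0.2 while the initiator port stays in the root namespace at 10.0.0.1. A condensed standalone sketch of those steps (interface and namespace names mirror this log and are illustrative, not a fixed SPDK interface):

  ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # verify reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The namespace split lets target and initiator share one host while still exercising the real NIC datapath instead of loopback.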
00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:14:41.161 00:14:41.161 --- 10.0.0.2 ping statistics --- 00:14:41.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.161 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:41.161 00:14:41.161 --- 10.0.0.1 ping statistics --- 00:14:41.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.161 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1019069 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1019069 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1019069 ']' 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.161 03:09:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.161 [2024-05-15 03:09:11.945621] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:14:41.161 [2024-05-15 03:09:11.945662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.161 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.161 [2024-05-15 03:09:12.003202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.161 [2024-05-15 03:09:12.083856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.161 [2024-05-15 03:09:12.083895] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:41.161 [2024-05-15 03:09:12.083902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.161 [2024-05-15 03:09:12.083908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.161 [2024-05-15 03:09:12.083913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.161 [2024-05-15 03:09:12.083965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.161 [2024-05-15 03:09:12.084059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.161 [2024-05-15 03:09:12.084280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.161 [2024-05-15 03:09:12.084282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.730 [2024-05-15 03:09:12.791305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.730 Malloc0 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.730 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 [2024-05-15 03:09:12.842895] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.731 [2024-05-15 03:09:12.843136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:41.731 test case1: single bdev can't be used in multiple subsystems 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 [2024-05-15 03:09:12.867025] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:41.731 [2024-05-15 03:09:12.867043] subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:41.731 [2024-05-15 03:09:12.867050] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.731 request: 00:14:41.731 { 00:14:41.731 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:41.731 "namespace": { 00:14:41.731 "bdev_name": "Malloc0", 00:14:41.731 "no_auto_visible": false 00:14:41.731 }, 00:14:41.731 "method": "nvmf_subsystem_add_ns", 00:14:41.731 "req_id": 1 00:14:41.731 } 00:14:41.731 Got JSON-RPC error response 00:14:41.731 response: 00:14:41.731 { 00:14:41.731 "code": -32602, 00:14:41.731 "message": "Invalid parameters" 00:14:41.731 } 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:41.731 Adding namespace failed - expected result. 
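The error above is the expected outcome of test case1: when Malloc0 became a namespace of cnode1 it was claimed exclusive_write by the NVMe-oF target module, so the attempt to add the same bdev to cnode2 fails in bdev_open and surfaces as JSON-RPC error -32602. The same check can be reproduced directly with scripts/rpc.py against a running nvmf_tgt (a sketch; assumes the default /var/tmp/spdk.sock):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds and claims the bdev
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Invalid parameters (-32602)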
00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:41.731 test case2: host connect to nvmf target in multiple paths 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.731 [2024-05-15 03:09:12.879125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.731 03:09:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.107 03:09:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:44.547 03:09:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.547 03:09:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:14:44.547 03:09:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.547 03:09:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:44.547 03:09:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:14:46.454 03:09:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:46.454 [global] 00:14:46.454 thread=1 00:14:46.454 invalidate=1 00:14:46.454 rw=write 00:14:46.454 time_based=1 00:14:46.454 runtime=1 00:14:46.454 ioengine=libaio 00:14:46.454 direct=1 00:14:46.454 bs=4096 00:14:46.454 iodepth=1 00:14:46.454 norandommap=0 00:14:46.454 numjobs=1 00:14:46.454 00:14:46.454 verify_dump=1 00:14:46.454 verify_backlog=512 00:14:46.454 verify_state_save=0 00:14:46.454 do_verify=1 00:14:46.454 verify=crc32c-intel 00:14:46.454 [job0] 00:14:46.454 filename=/dev/nvme0n1 00:14:46.454 Could not set queue depth (nvme0n1) 00:14:46.454 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:46.454 fio-3.35 00:14:46.454 Starting 1 thread 00:14:47.830 00:14:47.830 job0: (groupid=0, jobs=1): err= 0: pid=1020158: Wed May 15 03:09:18 2024 00:14:47.830 read: IOPS=378, BW=1512KiB/s (1549kB/s)(1532KiB/1013msec) 00:14:47.830 slat (nsec): min=6311, max=28996, avg=7853.04, stdev=3685.50 
00:14:47.830 clat (usec): min=259, max=42073, avg=2335.87, stdev=8991.29 00:14:47.830 lat (usec): min=266, max=42096, avg=2343.72, stdev=8994.60 00:14:47.830 clat percentiles (usec): 00:14:47.830 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:14:47.830 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 285], 00:14:47.830 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 297], 95.00th=[ 469], 00:14:47.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:47.830 | 99.99th=[42206] 00:14:47.830 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:14:47.830 slat (usec): min=9, max=23867, avg=57.04, stdev=1054.36 00:14:47.830 clat (usec): min=132, max=435, avg=160.97, stdev=17.66 00:14:47.830 lat (usec): min=147, max=24184, avg=218.00, stdev=1061.39 00:14:47.830 clat percentiles (usec): 00:14:47.830 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:14:47.830 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:14:47.830 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 184], 00:14:47.830 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 437], 99.95th=[ 437], 00:14:47.830 | 99.99th=[ 437] 00:14:47.830 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:47.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:47.830 lat (usec) : 250=56.98%, 500=40.89% 00:14:47.830 lat (msec) : 50=2.12% 00:14:47.830 cpu : usr=0.30%, sys=0.89%, ctx=899, majf=0, minf=2 00:14:47.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.830 issued rwts: total=383,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.830 00:14:47.830 Run status group 0 (all jobs): 00:14:47.830 READ: bw=1512KiB/s (1549kB/s), 1512KiB/s-1512KiB/s (1549kB/s-1549kB/s), io=1532KiB (1569kB), run=1013-1013msec 00:14:47.830 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:14:47.830 00:14:47.830 Disk stats (read/write): 00:14:47.830 nvme0n1: ios=406/512, merge=0/0, ticks=1766/76, in_queue=1842, util=98.50% 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
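The short I/O pass above was generated by scripts/fio-wrapper from the job file printed before it: a one-second, queue-depth-1, 4 KiB sequential write with crc32c-intel verification against the connected namespace. An approximately equivalent standalone invocation (a sketch; assumes fio is installed and that /dev/nvme0n1 is the namespace exposed by the nvme connect step):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel \
      --verify_dump=1 --verify_backlog=512 --verify_state_save=0

The read-side statistics in the output come from the verification phase rather than from the workload itself, which is write-only.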
00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.830 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.830 rmmod nvme_tcp 00:14:47.830 rmmod nvme_fabrics 00:14:47.831 rmmod nvme_keyring 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1019069 ']' 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1019069 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1019069 ']' 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1019069 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:47.831 03:09:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1019069 00:14:48.090 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:48.090 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:48.090 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1019069' 00:14:48.090 killing process with pid 1019069 00:14:48.090 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1019069 00:14:48.090 [2024-05-15 03:09:19.025470] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:48.090 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1019069 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.348 03:09:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.256 03:09:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.256 00:14:50.256 real 0m14.580s 00:14:50.256 user 0m35.500s 00:14:50.256 sys 0m4.406s 00:14:50.256 03:09:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:50.256 03:09:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.256 ************************************ 00:14:50.256 END TEST nvmf_nmic 00:14:50.256 ************************************ 00:14:50.256 03:09:21 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.256 03:09:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:50.256 03:09:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:50.256 03:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.256 ************************************ 00:14:50.256 START TEST nvmf_fio_target 00:14:50.256 ************************************ 00:14:50.256 03:09:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.515 * Looking for test storage... 00:14:50.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.516 03:09:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:55.786 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.787 03:09:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:55.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:55.787 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.787 03:09:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:55.787 Found net devices under 0000:86:00.0: cvl_0_0 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:55.787 Found net devices under 0000:86:00.1: cvl_0_1 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:55.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:14:55.787 00:14:55.787 --- 10.0.0.2 ping statistics --- 00:14:55.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.787 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:14:55.787 00:14:55.787 --- 10.0.0.1 ping statistics --- 00:14:55.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.787 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1023691 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1023691 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1023691 ']' 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
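nvmfappstart launches the target application inside the target namespace and then blocks in waitforlisten until the RPC socket answers; only then do the rpc.py calls that follow run. The equivalent manual steps look roughly like this (a sketch; the polling loop stands in for the more thorough waitforlisten helper in autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app is up and serving RPC on the default socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done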
00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.787 03:09:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.787 [2024-05-15 03:09:26.461432] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:14:55.787 [2024-05-15 03:09:26.461480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.787 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.787 [2024-05-15 03:09:26.518078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.787 [2024-05-15 03:09:26.598571] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.787 [2024-05-15 03:09:26.598606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.787 [2024-05-15 03:09:26.598613] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.787 [2024-05-15 03:09:26.598619] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.787 [2024-05-15 03:09:26.598624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.787 [2024-05-15 03:09:26.598662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.787 [2024-05-15 03:09:26.598679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.787 [2024-05-15 03:09:26.598766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.787 [2024-05-15 03:09:26.598767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.353 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.611 [2024-05-15 03:09:27.520241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.611 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.611 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:56.611 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.870 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:56.870 03:09:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.129 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:57.129 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.387 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:57.387 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:57.387 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.646 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:57.646 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:57.904 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:57.904 03:09:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.163 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:58.163 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:58.163 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:58.421 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.421 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.679 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:58.679 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.679 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.937 [2024-05-15 03:09:29.970042] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:58.937 [2024-05-15 03:09:29.970282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.937 03:09:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:59.195 03:09:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:59.454 03:09:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:15:00.850 03:09:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:15:02.747 03:09:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:02.747 [global] 00:15:02.747 thread=1 00:15:02.747 invalidate=1 00:15:02.747 rw=write 00:15:02.747 time_based=1 00:15:02.747 runtime=1 00:15:02.747 ioengine=libaio 00:15:02.747 direct=1 00:15:02.747 bs=4096 00:15:02.747 iodepth=1 00:15:02.748 norandommap=0 00:15:02.748 numjobs=1 00:15:02.748 00:15:02.748 verify_dump=1 00:15:02.748 verify_backlog=512 00:15:02.748 verify_state_save=0 00:15:02.748 do_verify=1 00:15:02.748 verify=crc32c-intel 00:15:02.748 [job0] 00:15:02.748 filename=/dev/nvme0n1 00:15:02.748 [job1] 00:15:02.748 filename=/dev/nvme0n2 00:15:02.748 [job2] 00:15:02.748 filename=/dev/nvme0n3 00:15:02.748 [job3] 00:15:02.748 filename=/dev/nvme0n4 00:15:02.748 Could not set queue depth (nvme0n1) 00:15:02.748 Could not set queue depth (nvme0n2) 00:15:02.748 Could not set queue depth (nvme0n3) 00:15:02.748 Could not set queue depth (nvme0n4) 00:15:03.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.005 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.005 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.005 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.005 fio-3.35 00:15:03.005 Starting 4 threads 00:15:04.381 00:15:04.381 job0: (groupid=0, jobs=1): err= 0: pid=1025040: Wed May 15 03:09:35 2024 00:15:04.381 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:04.381 slat (nsec): min=6190, max=27805, avg=7183.14, stdev=1046.51 00:15:04.381 clat (usec): min=212, max=1395, avg=247.20, stdev=29.43 00:15:04.381 lat (usec): min=219, max=1402, avg=254.38, stdev=29.47 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:15:04.381 | 30.00th=[ 
241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:15:04.381 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:15:04.381 | 99.00th=[ 281], 99.50th=[ 343], 99.90th=[ 392], 99.95th=[ 469], 00:15:04.381 | 99.99th=[ 1401] 00:15:04.381 write: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(9.89MiB/1001msec); 0 zone resets 00:15:04.381 slat (nsec): min=9344, max=92144, avg=10833.87, stdev=3066.16 00:15:04.381 clat (usec): min=137, max=413, avg=174.09, stdev=19.26 00:15:04.381 lat (usec): min=146, max=438, avg=184.92, stdev=20.13 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:15:04.381 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:15:04.381 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 210], 00:15:04.381 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 302], 99.95th=[ 343], 00:15:04.381 | 99.99th=[ 412] 00:15:04.381 bw ( KiB/s): min= 9744, max= 9744, per=37.12%, avg=9744.00, stdev= 0.00, samples=1 00:15:04.381 iops : min= 2436, max= 2436, avg=2436.00, stdev= 0.00, samples=1 00:15:04.381 lat (usec) : 250=83.93%, 500=16.04% 00:15:04.381 lat (msec) : 2=0.02% 00:15:04.381 cpu : usr=2.60%, sys=4.00%, ctx=4583, majf=0, minf=1 00:15:04.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.381 issued rwts: total=2048,2533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.381 job1: (groupid=0, jobs=1): err= 0: pid=1025041: Wed May 15 03:09:35 2024 00:15:04.381 read: IOPS=1692, BW=6769KiB/s (6932kB/s)(6776KiB/1001msec) 00:15:04.381 slat (nsec): min=6905, max=36662, avg=7892.85, stdev=1254.45 00:15:04.381 clat (usec): min=269, max=547, avg=329.92, stdev=46.79 00:15:04.381 lat (usec): min=276, max=555, avg=337.81, stdev=46.80 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:15:04.381 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:15:04.381 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 404], 00:15:04.381 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 537], 99.95th=[ 545], 00:15:04.381 | 99.99th=[ 545] 00:15:04.381 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:04.381 slat (nsec): min=10074, max=69614, avg=11470.81, stdev=1876.52 00:15:04.381 clat (usec): min=147, max=371, avg=191.69, stdev=27.30 00:15:04.381 lat (usec): min=158, max=383, avg=203.16, stdev=27.51 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:15:04.381 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:15:04.381 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 251], 00:15:04.381 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 371], 00:15:04.381 | 99.99th=[ 371] 00:15:04.381 bw ( KiB/s): min= 8192, max= 8192, per=31.20%, avg=8192.00, stdev= 0.00, samples=1 00:15:04.381 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:04.381 lat (usec) : 250=51.98%, 500=47.19%, 750=0.83% 00:15:04.381 cpu : usr=3.30%, sys=5.80%, ctx=3743, majf=0, minf=1 00:15:04.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:04.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.381 issued rwts: total=1694,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.381 job2: (groupid=0, jobs=1): err= 0: pid=1025042: Wed May 15 03:09:35 2024 00:15:04.381 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:15:04.381 slat (nsec): min=9417, max=23822, avg=13556.87, stdev=4586.87 00:15:04.381 clat (usec): min=368, max=41042, avg=39196.01, stdev=8464.57 00:15:04.381 lat (usec): min=390, max=41057, avg=39209.57, stdev=8462.89 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:04.381 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:04.381 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:04.381 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:04.381 | 99.99th=[41157] 00:15:04.381 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:15:04.381 slat (nsec): min=10385, max=36275, avg=12291.62, stdev=1943.33 00:15:04.381 clat (usec): min=168, max=325, avg=195.32, stdev=14.84 00:15:04.381 lat (usec): min=179, max=337, avg=207.61, stdev=15.13 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:15:04.381 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:15:04.381 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:15:04.381 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 326], 99.95th=[ 326], 00:15:04.381 | 99.99th=[ 326] 00:15:04.381 bw ( KiB/s): min= 4096, max= 4096, per=15.60%, avg=4096.00, stdev= 0.00, samples=1 00:15:04.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:04.381 lat (usec) : 250=94.39%, 500=1.50% 00:15:04.381 lat (msec) : 50=4.11% 00:15:04.381 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 00:15:04.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.381 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.381 job3: (groupid=0, jobs=1): err= 0: pid=1025043: Wed May 15 03:09:35 2024 00:15:04.381 read: IOPS=1485, BW=5942KiB/s (6085kB/s)(5948KiB/1001msec) 00:15:04.381 slat (nsec): min=6468, max=24351, avg=7382.79, stdev=964.99 00:15:04.381 clat (usec): min=261, max=41023, avg=452.51, stdev=2104.12 00:15:04.381 lat (usec): min=268, max=41032, avg=459.89, stdev=2104.75 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:15:04.381 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 355], 00:15:04.381 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 404], 00:15:04.381 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41157], 00:15:04.381 | 99.99th=[41157] 00:15:04.381 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:04.381 slat (nsec): min=9205, max=58053, avg=10364.29, stdev=1639.95 00:15:04.381 clat (usec): min=154, max=396, avg=191.09, stdev=23.45 00:15:04.381 lat (usec): min=164, max=454, avg=201.46, stdev=23.81 00:15:04.381 clat percentiles (usec): 00:15:04.381 | 1.00th=[ 159], 5.00th=[ 163], 
10.00th=[ 167], 20.00th=[ 174], 00:15:04.381 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:15:04.381 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 237], 00:15:04.381 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 396], 00:15:04.381 | 99.99th=[ 396] 00:15:04.381 bw ( KiB/s): min= 5632, max= 5632, per=21.45%, avg=5632.00, stdev= 0.00, samples=1 00:15:04.381 iops : min= 1408, max= 1408, avg=1408.00, stdev= 0.00, samples=1 00:15:04.381 lat (usec) : 250=49.06%, 500=50.65%, 750=0.17% 00:15:04.382 lat (msec) : 50=0.13% 00:15:04.382 cpu : usr=1.80%, sys=2.60%, ctx=3024, majf=0, minf=2 00:15:04.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:04.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.382 issued rwts: total=1487,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:04.382 00:15:04.382 Run status group 0 (all jobs): 00:15:04.382 READ: bw=20.3MiB/s (21.3MB/s), 91.1KiB/s-8184KiB/s (93.3kB/s-8380kB/s), io=20.5MiB (21.5MB), run=1001-1010msec 00:15:04.382 WRITE: bw=25.6MiB/s (26.9MB/s), 2028KiB/s-9.88MiB/s (2076kB/s-10.4MB/s), io=25.9MiB (27.2MB), run=1001-1010msec 00:15:04.382 00:15:04.382 Disk stats (read/write): 00:15:04.382 nvme0n1: ios=1839/2048, merge=0/0, ticks=1423/350, in_queue=1773, util=98.30% 00:15:04.382 nvme0n2: ios=1554/1570, merge=0/0, ticks=618/281, in_queue=899, util=91.06% 00:15:04.382 nvme0n3: ios=19/512, merge=0/0, ticks=738/93, in_queue=831, util=89.05% 00:15:04.382 nvme0n4: ios=1070/1536, merge=0/0, ticks=519/286, in_queue=805, util=89.71% 00:15:04.382 03:09:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:04.382 [global] 00:15:04.382 thread=1 00:15:04.382 invalidate=1 00:15:04.382 rw=randwrite 00:15:04.382 time_based=1 00:15:04.382 runtime=1 00:15:04.382 ioengine=libaio 00:15:04.382 direct=1 00:15:04.382 bs=4096 00:15:04.382 iodepth=1 00:15:04.382 norandommap=0 00:15:04.382 numjobs=1 00:15:04.382 00:15:04.382 verify_dump=1 00:15:04.382 verify_backlog=512 00:15:04.382 verify_state_save=0 00:15:04.382 do_verify=1 00:15:04.382 verify=crc32c-intel 00:15:04.382 [job0] 00:15:04.382 filename=/dev/nvme0n1 00:15:04.382 [job1] 00:15:04.382 filename=/dev/nvme0n2 00:15:04.382 [job2] 00:15:04.382 filename=/dev/nvme0n3 00:15:04.382 [job3] 00:15:04.382 filename=/dev/nvme0n4 00:15:04.382 Could not set queue depth (nvme0n1) 00:15:04.382 Could not set queue depth (nvme0n2) 00:15:04.382 Could not set queue depth (nvme0n3) 00:15:04.382 Could not set queue depth (nvme0n4) 00:15:04.382 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.382 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.382 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.382 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.382 fio-3.35 00:15:04.382 Starting 4 threads 00:15:05.758 00:15:05.758 job0: (groupid=0, jobs=1): err= 0: pid=1025433: Wed May 15 03:09:36 2024 00:15:05.758 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:15:05.758 slat (nsec): min=9429, 
max=23450, avg=22386.32, stdev=2904.32 00:15:05.758 clat (usec): min=25456, max=41982, avg=40460.76, stdev=3373.58 00:15:05.758 lat (usec): min=25479, max=42005, avg=40483.15, stdev=3373.37 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[25560], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:05.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:05.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:05.758 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:05.758 | 99.99th=[42206] 00:15:05.758 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:15:05.758 slat (nsec): min=5462, max=40031, avg=10376.69, stdev=2550.39 00:15:05.758 clat (usec): min=151, max=372, avg=206.82, stdev=28.88 00:15:05.758 lat (usec): min=161, max=393, avg=217.20, stdev=29.73 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:15:05.758 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:15:05.758 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 247], 00:15:05.758 | 99.00th=[ 285], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 371], 00:15:05.758 | 99.99th=[ 371] 00:15:05.758 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 00:15:05.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:05.758 lat (usec) : 250=92.70%, 500=3.18% 00:15:05.758 lat (msec) : 50=4.12% 00:15:05.758 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=1 00:15:05.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.758 job1: (groupid=0, jobs=1): err= 0: pid=1025452: Wed May 15 03:09:36 2024 00:15:05.758 read: IOPS=2193, BW=8775KiB/s (8986kB/s)(8784KiB/1001msec) 00:15:05.758 slat (nsec): min=6090, max=29043, avg=6912.12, stdev=1126.18 00:15:05.758 clat (usec): min=191, max=357, avg=233.04, stdev=15.77 00:15:05.758 lat (usec): min=205, max=367, avg=239.95, stdev=15.87 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:15:05.758 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:15:05.758 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:15:05.758 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 343], 00:15:05.758 | 99.99th=[ 359] 00:15:05.758 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:05.758 slat (nsec): min=9027, max=35591, avg=10402.65, stdev=1986.82 00:15:05.758 clat (usec): min=122, max=402, avg=169.97, stdev=25.06 00:15:05.758 lat (usec): min=141, max=435, avg=180.37, stdev=25.78 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:15:05.758 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 172], 00:15:05.758 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:15:05.758 | 99.00th=[ 247], 99.50th=[ 265], 99.90th=[ 334], 99.95th=[ 375], 00:15:05.758 | 99.99th=[ 404] 00:15:05.758 bw ( KiB/s): min=11264, max=11264, per=69.99%, avg=11264.00, stdev= 0.00, samples=1 00:15:05.758 iops : min= 2816, max= 
2816, avg=2816.00, stdev= 0.00, samples=1 00:15:05.758 lat (usec) : 250=92.62%, 500=7.38% 00:15:05.758 cpu : usr=2.30%, sys=4.40%, ctx=4757, majf=0, minf=1 00:15:05.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 issued rwts: total=2196,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.758 job2: (groupid=0, jobs=1): err= 0: pid=1025484: Wed May 15 03:09:36 2024 00:15:05.758 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:15:05.758 slat (nsec): min=10729, max=24419, avg=22270.59, stdev=2655.74 00:15:05.758 clat (usec): min=40847, max=41083, avg=40965.33, stdev=48.40 00:15:05.758 lat (usec): min=40870, max=41105, avg=40987.60, stdev=49.01 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:05.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:05.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:05.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:05.758 | 99.99th=[41157] 00:15:05.758 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:15:05.758 slat (nsec): min=10682, max=64614, avg=12517.18, stdev=3202.30 00:15:05.758 clat (usec): min=158, max=677, avg=208.45, stdev=44.34 00:15:05.758 lat (usec): min=170, max=688, avg=220.96, stdev=44.62 00:15:05.758 clat percentiles (usec): 00:15:05.758 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:15:05.758 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:15:05.758 | 70.00th=[ 210], 80.00th=[ 231], 90.00th=[ 253], 95.00th=[ 277], 00:15:05.758 | 99.00th=[ 322], 99.50th=[ 482], 99.90th=[ 676], 99.95th=[ 676], 00:15:05.758 | 99.99th=[ 676] 00:15:05.758 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 00:15:05.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:05.758 lat (usec) : 250=84.83%, 500=10.67%, 750=0.37% 00:15:05.758 lat (msec) : 50=4.12% 00:15:05.758 cpu : usr=0.49%, sys=0.88%, ctx=535, majf=0, minf=1 00:15:05.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.758 job3: (groupid=0, jobs=1): err= 0: pid=1025494: Wed May 15 03:09:36 2024 00:15:05.758 read: IOPS=23, BW=94.4KiB/s (96.7kB/s)(96.0KiB/1017msec) 00:15:05.758 slat (nsec): min=8766, max=23632, avg=21211.67, stdev=4693.60 00:15:05.758 clat (usec): min=330, max=42010, avg=37799.13, stdev=11533.48 00:15:05.759 lat (usec): min=352, max=42033, avg=37820.34, stdev=11533.16 00:15:05.759 clat percentiles (usec): 00:15:05.759 | 1.00th=[ 330], 5.00th=[ 424], 10.00th=[40633], 20.00th=[40633], 00:15:05.759 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:05.759 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:05.759 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:05.759 | 99.99th=[42206] 00:15:05.759 write: 
IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:15:05.759 slat (nsec): min=9132, max=40730, avg=10349.44, stdev=1713.20 00:15:05.759 clat (usec): min=157, max=1277, avg=197.96, stdev=50.61 00:15:05.759 lat (usec): min=167, max=1287, avg=208.31, stdev=50.78 00:15:05.759 clat percentiles (usec): 00:15:05.759 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:15:05.759 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:15:05.759 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 219], 00:15:05.759 | 99.00th=[ 251], 99.50th=[ 338], 99.90th=[ 1270], 99.95th=[ 1270], 00:15:05.759 | 99.99th=[ 1270] 00:15:05.759 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 00:15:05.759 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:05.759 lat (usec) : 250=94.40%, 500=1.31% 00:15:05.759 lat (msec) : 2=0.19%, 50=4.10% 00:15:05.759 cpu : usr=0.00%, sys=0.79%, ctx=538, majf=0, minf=2 00:15:05.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.759 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.759 00:15:05.759 Run status group 0 (all jobs): 00:15:05.759 READ: bw=8896KiB/s (9109kB/s), 86.4KiB/s-8775KiB/s (88.5kB/s-8986kB/s), io=9056KiB (9273kB), run=1001-1018msec 00:15:05.759 WRITE: bw=15.7MiB/s (16.5MB/s), 2012KiB/s-9.99MiB/s (2060kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1018msec 00:15:05.759 00:15:05.759 Disk stats (read/write): 00:15:05.759 nvme0n1: ios=68/512, merge=0/0, ticks=893/104, in_queue=997, util=97.59% 00:15:05.759 nvme0n2: ios=1799/2048, merge=0/0, ticks=1315/354, in_queue=1669, util=98.66% 00:15:05.759 nvme0n3: ios=74/512, merge=0/0, ticks=839/99, in_queue=938, util=97.51% 00:15:05.759 nvme0n4: ios=56/512, merge=0/0, ticks=1503/98, in_queue=1601, util=99.56% 00:15:05.759 03:09:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:05.759 [global] 00:15:05.759 thread=1 00:15:05.759 invalidate=1 00:15:05.759 rw=write 00:15:05.759 time_based=1 00:15:05.759 runtime=1 00:15:05.759 ioengine=libaio 00:15:05.759 direct=1 00:15:05.759 bs=4096 00:15:05.759 iodepth=128 00:15:05.759 norandommap=0 00:15:05.759 numjobs=1 00:15:05.759 00:15:05.759 verify_dump=1 00:15:05.759 verify_backlog=512 00:15:05.759 verify_state_save=0 00:15:05.759 do_verify=1 00:15:05.759 verify=crc32c-intel 00:15:05.759 [job0] 00:15:05.759 filename=/dev/nvme0n1 00:15:05.759 [job1] 00:15:05.759 filename=/dev/nvme0n2 00:15:05.759 [job2] 00:15:05.759 filename=/dev/nvme0n3 00:15:05.759 [job3] 00:15:05.759 filename=/dev/nvme0n4 00:15:05.759 Could not set queue depth (nvme0n1) 00:15:05.759 Could not set queue depth (nvme0n2) 00:15:05.759 Could not set queue depth (nvme0n3) 00:15:05.759 Could not set queue depth (nvme0n4) 00:15:06.018 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.018 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.018 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.018 job3: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:06.018 fio-3.35 00:15:06.018 Starting 4 threads 00:15:07.415 00:15:07.415 job0: (groupid=0, jobs=1): err= 0: pid=1025905: Wed May 15 03:09:38 2024 00:15:07.415 read: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec) 00:15:07.415 slat (nsec): min=1342, max=27443k, avg=210395.54, stdev=1611118.98 00:15:07.415 clat (usec): min=8948, max=68317, avg=28011.89, stdev=16233.86 00:15:07.415 lat (usec): min=8957, max=83060, avg=28222.28, stdev=16415.76 00:15:07.415 clat percentiles (usec): 00:15:07.415 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10290], 20.00th=[10945], 00:15:07.415 | 30.00th=[11863], 40.00th=[15270], 50.00th=[26870], 60.00th=[35914], 00:15:07.415 | 70.00th=[39584], 80.00th=[43779], 90.00th=[51119], 95.00th=[53216], 00:15:07.415 | 99.00th=[62653], 99.50th=[63177], 99.90th=[66847], 99.95th=[67634], 00:15:07.415 | 99.99th=[68682] 00:15:07.415 write: IOPS=2367, BW=9469KiB/s (9696kB/s)(9592KiB/1013msec); 0 zone resets 00:15:07.415 slat (usec): min=2, max=20843, avg=234.84, stdev=1333.25 00:15:07.415 clat (usec): min=2269, max=92579, avg=29555.46, stdev=21130.91 00:15:07.415 lat (usec): min=5510, max=97258, avg=29790.30, stdev=21273.15 00:15:07.415 clat percentiles (usec): 00:15:07.415 | 1.00th=[ 7504], 5.00th=[ 9896], 10.00th=[16581], 20.00th=[17171], 00:15:07.415 | 30.00th=[17433], 40.00th=[17695], 50.00th=[19268], 60.00th=[25822], 00:15:07.415 | 70.00th=[26608], 80.00th=[38536], 90.00th=[69731], 95.00th=[85459], 00:15:07.415 | 99.00th=[91751], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:15:07.415 | 99.99th=[92799] 00:15:07.415 bw ( KiB/s): min= 8616, max= 9544, per=14.86%, avg=9080.00, stdev=656.20, samples=2 00:15:07.415 iops : min= 2154, max= 2386, avg=2270.00, stdev=164.05, samples=2 00:15:07.415 lat (msec) : 4=0.02%, 10=4.43%, 20=44.17%, 50=38.98%, 100=12.39% 00:15:07.415 cpu : usr=2.57%, sys=2.77%, ctx=210, majf=0, minf=1 00:15:07.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:07.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.415 issued rwts: total=2048,2398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.415 job1: (groupid=0, jobs=1): err= 0: pid=1025919: Wed May 15 03:09:38 2024 00:15:07.415 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:15:07.415 slat (nsec): min=1339, max=16467k, avg=112113.88, stdev=803374.96 00:15:07.415 clat (usec): min=3478, max=90496, avg=12647.96, stdev=10407.13 00:15:07.415 lat (usec): min=3485, max=90503, avg=12760.07, stdev=10502.01 00:15:07.415 clat percentiles (usec): 00:15:07.415 | 1.00th=[ 6128], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:15:07.415 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:15:07.415 | 70.00th=[10290], 80.00th=[13173], 90.00th=[19268], 95.00th=[23200], 00:15:07.415 | 99.00th=[69731], 99.50th=[81265], 99.90th=[90702], 99.95th=[90702], 00:15:07.415 | 99.99th=[90702] 00:15:07.415 write: IOPS=4870, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1006msec); 0 zone resets 00:15:07.415 slat (usec): min=2, max=9205, avg=92.22, stdev=574.32 00:15:07.415 clat (usec): min=1420, max=90497, avg=14139.69, stdev=11164.57 00:15:07.415 lat (usec): min=1435, max=90509, avg=14231.92, stdev=11215.63 00:15:07.415 clat percentiles (usec): 00:15:07.415 | 1.00th=[ 3425], 5.00th=[ 5932], 10.00th=[ 
6849], 20.00th=[ 7635], 00:15:07.415 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[11994], 00:15:07.415 | 70.00th=[16581], 80.00th=[17695], 90.00th=[26608], 95.00th=[42206], 00:15:07.415 | 99.00th=[51643], 99.50th=[58983], 99.90th=[74974], 99.95th=[74974], 00:15:07.415 | 99.99th=[90702] 00:15:07.415 bw ( KiB/s): min=17112, max=21072, per=31.24%, avg=19092.00, stdev=2800.14, samples=2 00:15:07.415 iops : min= 4278, max= 5268, avg=4773.00, stdev=700.04, samples=2 00:15:07.415 lat (msec) : 2=0.06%, 4=0.76%, 10=59.73%, 20=29.04%, 50=8.44% 00:15:07.415 lat (msec) : 100=1.98% 00:15:07.415 cpu : usr=4.28%, sys=5.47%, ctx=395, majf=0, minf=1 00:15:07.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:07.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.416 issued rwts: total=4608,4900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.416 job2: (groupid=0, jobs=1): err= 0: pid=1025938: Wed May 15 03:09:38 2024 00:15:07.416 read: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1014msec) 00:15:07.416 slat (nsec): min=1266, max=10072k, avg=119287.77, stdev=804640.92 00:15:07.416 clat (usec): min=4065, max=38885, avg=13076.80, stdev=4625.58 00:15:07.416 lat (usec): min=4075, max=38896, avg=13196.08, stdev=4692.55 00:15:07.416 clat percentiles (usec): 00:15:07.416 | 1.00th=[ 5800], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:15:07.416 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:15:07.416 | 70.00th=[13960], 80.00th=[16319], 90.00th=[19530], 95.00th=[23462], 00:15:07.416 | 99.00th=[27395], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:15:07.416 | 99.99th=[39060] 00:15:07.416 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:15:07.416 slat (usec): min=2, max=8998, avg=219.31, stdev=996.30 00:15:07.416 clat (usec): min=1596, max=102769, avg=30793.22, stdev=25153.96 00:15:07.416 lat (usec): min=1611, max=102783, avg=31012.53, stdev=25311.18 00:15:07.416 clat percentiles (msec): 00:15:07.416 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:15:07.416 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 23], 00:15:07.416 | 70.00th=[ 39], 80.00th=[ 48], 90.00th=[ 75], 95.00th=[ 87], 00:15:07.416 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 103], 00:15:07.416 | 99.99th=[ 104] 00:15:07.416 bw ( KiB/s): min=10504, max=13360, per=19.53%, avg=11932.00, stdev=2019.50, samples=2 00:15:07.416 iops : min= 2626, max= 3340, avg=2983.00, stdev=504.87, samples=2 00:15:07.416 lat (msec) : 2=0.05%, 4=0.71%, 10=20.35%, 20=52.52%, 50=15.93% 00:15:07.416 lat (msec) : 100=10.32%, 250=0.12% 00:15:07.416 cpu : usr=3.16%, sys=2.67%, ctx=384, majf=0, minf=1 00:15:07.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:07.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.416 issued rwts: total=2598,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.416 job3: (groupid=0, jobs=1): err= 0: pid=1025945: Wed May 15 03:09:38 2024 00:15:07.416 read: IOPS=4916, BW=19.2MiB/s (20.1MB/s)(19.5MiB/1013msec) 00:15:07.416 slat (nsec): min=1485, max=13206k, avg=108741.35, stdev=817063.23 00:15:07.416 clat 
(usec): min=1619, max=48737, avg=13194.93, stdev=4863.22 00:15:07.416 lat (usec): min=4990, max=48743, avg=13303.67, stdev=4952.62 00:15:07.416 clat percentiles (usec): 00:15:07.416 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:15:07.416 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11863], 60.00th=[12911], 00:15:07.416 | 70.00th=[13960], 80.00th=[15139], 90.00th=[17171], 95.00th=[20317], 00:15:07.416 | 99.00th=[36963], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:15:07.416 | 99.99th=[48497] 00:15:07.416 write: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1013msec); 0 zone resets 00:15:07.416 slat (usec): min=2, max=11246, avg=84.87, stdev=620.53 00:15:07.416 clat (usec): min=3036, max=48725, avg=12215.84, stdev=6129.84 00:15:07.416 lat (usec): min=3047, max=48730, avg=12300.72, stdev=6176.07 00:15:07.416 clat percentiles (usec): 00:15:07.416 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 8356], 00:15:07.416 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11731], 00:15:07.416 | 70.00th=[12387], 80.00th=[14091], 90.00th=[17695], 95.00th=[26084], 00:15:07.416 | 99.00th=[38011], 99.50th=[39060], 99.90th=[40633], 99.95th=[41157], 00:15:07.416 | 99.99th=[48497] 00:15:07.416 bw ( KiB/s): min=20480, max=20480, per=33.52%, avg=20480.00, stdev= 0.00, samples=2 00:15:07.416 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:15:07.416 lat (msec) : 2=0.01%, 4=0.26%, 10=29.87%, 20=63.97%, 50=5.89% 00:15:07.416 cpu : usr=4.74%, sys=6.03%, ctx=305, majf=0, minf=1 00:15:07.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:07.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.416 issued rwts: total=4980,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.416 00:15:07.416 Run status group 0 (all jobs): 00:15:07.416 READ: bw=54.8MiB/s (57.5MB/s), 8087KiB/s-19.2MiB/s (8281kB/s-20.1MB/s), io=55.6MiB (58.3MB), run=1006-1014msec 00:15:07.416 WRITE: bw=59.7MiB/s (62.6MB/s), 9469KiB/s-19.7MiB/s (9696kB/s-20.7MB/s), io=60.5MiB (63.4MB), run=1006-1014msec 00:15:07.416 00:15:07.416 Disk stats (read/write): 00:15:07.416 nvme0n1: ios=1585/1679, merge=0/0, ticks=24203/26655, in_queue=50858, util=85.97% 00:15:07.416 nvme0n2: ios=3605/3591, merge=0/0, ticks=46765/57196, in_queue=103961, util=87.98% 00:15:07.416 nvme0n3: ios=2560/2767, merge=0/0, ticks=31922/70137, in_queue=102059, util=88.80% 00:15:07.416 nvme0n4: ios=4297/4608, merge=0/0, ticks=53210/49712, in_queue=102922, util=97.15% 00:15:07.416 03:09:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:07.416 [global] 00:15:07.416 thread=1 00:15:07.416 invalidate=1 00:15:07.416 rw=randwrite 00:15:07.416 time_based=1 00:15:07.416 runtime=1 00:15:07.416 ioengine=libaio 00:15:07.416 direct=1 00:15:07.416 bs=4096 00:15:07.416 iodepth=128 00:15:07.416 norandommap=0 00:15:07.416 numjobs=1 00:15:07.416 00:15:07.416 verify_dump=1 00:15:07.416 verify_backlog=512 00:15:07.416 verify_state_save=0 00:15:07.416 do_verify=1 00:15:07.416 verify=crc32c-intel 00:15:07.416 [job0] 00:15:07.416 filename=/dev/nvme0n1 00:15:07.416 [job1] 00:15:07.416 filename=/dev/nvme0n2 00:15:07.416 [job2] 00:15:07.416 filename=/dev/nvme0n3 00:15:07.416 [job3] 00:15:07.416 
filename=/dev/nvme0n4 00:15:07.416 Could not set queue depth (nvme0n1) 00:15:07.416 Could not set queue depth (nvme0n2) 00:15:07.416 Could not set queue depth (nvme0n3) 00:15:07.416 Could not set queue depth (nvme0n4) 00:15:07.684 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.684 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.684 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.684 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.684 fio-3.35 00:15:07.684 Starting 4 threads 00:15:09.053 00:15:09.053 job0: (groupid=0, jobs=1): err= 0: pid=1026372: Wed May 15 03:09:39 2024 00:15:09.053 read: IOPS=3910, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1003msec) 00:15:09.053 slat (nsec): min=995, max=16192k, avg=134800.83, stdev=816111.27 00:15:09.053 clat (usec): min=945, max=49137, avg=16976.13, stdev=9743.86 00:15:09.053 lat (usec): min=3322, max=49896, avg=17110.93, stdev=9797.01 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 8455], 20.00th=[ 9634], 00:15:09.053 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11731], 60.00th=[17171], 00:15:09.053 | 70.00th=[20579], 80.00th=[24249], 90.00th=[34341], 95.00th=[37487], 00:15:09.053 | 99.00th=[42730], 99.50th=[45876], 99.90th=[49021], 99.95th=[49021], 00:15:09.053 | 99.99th=[49021] 00:15:09.053 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:15:09.053 slat (nsec): min=1772, max=6575.7k, avg=110703.42, stdev=524492.34 00:15:09.053 clat (usec): min=5454, max=42847, avg=14623.30, stdev=8157.76 00:15:09.053 lat (usec): min=5478, max=42852, avg=14734.00, stdev=8204.37 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 5669], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[10028], 00:15:09.053 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[11207], 00:15:09.053 | 70.00th=[13042], 80.00th=[20579], 90.00th=[27395], 95.00th=[35914], 00:15:09.053 | 99.00th=[40633], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:09.053 | 99.99th=[42730] 00:15:09.053 bw ( KiB/s): min=12288, max=20480, per=23.76%, avg=16384.00, stdev=5792.62, samples=2 00:15:09.053 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:09.053 lat (usec) : 1000=0.01% 00:15:09.053 lat (msec) : 4=0.35%, 10=21.60%, 20=49.94%, 50=28.10% 00:15:09.053 cpu : usr=2.30%, sys=3.39%, ctx=478, majf=0, minf=1 00:15:09.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:09.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.053 issued rwts: total=3922,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.053 job1: (groupid=0, jobs=1): err= 0: pid=1026379: Wed May 15 03:09:39 2024 00:15:09.053 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:15:09.053 slat (nsec): min=1129, max=14558k, avg=124471.78, stdev=754371.46 00:15:09.053 clat (usec): min=7021, max=53348, avg=16010.60, stdev=5744.97 00:15:09.053 lat (usec): min=7026, max=55079, avg=16135.07, stdev=5799.43 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 7308], 5.00th=[ 8291], 10.00th=[10421], 20.00th=[10945], 00:15:09.053 | 
30.00th=[13698], 40.00th=[14746], 50.00th=[15008], 60.00th=[15926], 00:15:09.053 | 70.00th=[17695], 80.00th=[19006], 90.00th=[22152], 95.00th=[24511], 00:15:09.053 | 99.00th=[35390], 99.50th=[43254], 99.90th=[53216], 99.95th=[53216], 00:15:09.053 | 99.99th=[53216] 00:15:09.053 write: IOPS=4033, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1006msec); 0 zone resets 00:15:09.053 slat (nsec): min=1826, max=14301k, avg=131233.24, stdev=862770.05 00:15:09.053 clat (usec): min=1181, max=55019, avg=17249.99, stdev=7313.24 00:15:09.053 lat (usec): min=1191, max=55025, avg=17381.22, stdev=7369.53 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 6915], 5.00th=[10028], 10.00th=[11076], 20.00th=[13042], 00:15:09.053 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15401], 60.00th=[16188], 00:15:09.053 | 70.00th=[17695], 80.00th=[20317], 90.00th=[25560], 95.00th=[32113], 00:15:09.053 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:15:09.053 | 99.99th=[54789] 00:15:09.053 bw ( KiB/s): min=15064, max=16384, per=22.80%, avg=15724.00, stdev=933.38, samples=2 00:15:09.053 iops : min= 3766, max= 4096, avg=3931.00, stdev=233.35, samples=2 00:15:09.053 lat (msec) : 2=0.09%, 10=5.93%, 20=76.17%, 50=16.78%, 100=1.03% 00:15:09.053 cpu : usr=2.29%, sys=4.38%, ctx=280, majf=0, minf=1 00:15:09.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:09.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.053 issued rwts: total=3584,4058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.053 job2: (groupid=0, jobs=1): err= 0: pid=1026380: Wed May 15 03:09:39 2024 00:15:09.053 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:15:09.053 slat (nsec): min=1255, max=14850k, avg=123798.34, stdev=742842.88 00:15:09.053 clat (usec): min=6198, max=41050, avg=15535.26, stdev=5635.48 00:15:09.053 lat (usec): min=6204, max=41666, avg=15659.06, stdev=5705.10 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11600], 00:15:09.053 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13960], 60.00th=[15139], 00:15:09.053 | 70.00th=[15664], 80.00th=[19006], 90.00th=[25035], 95.00th=[26346], 00:15:09.053 | 99.00th=[34341], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:15:09.053 | 99.99th=[41157] 00:15:09.053 write: IOPS=4163, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1004msec); 0 zone resets 00:15:09.053 slat (usec): min=2, max=21400, avg=112.86, stdev=800.73 00:15:09.053 clat (usec): min=402, max=69521, avg=15185.97, stdev=8488.35 00:15:09.053 lat (usec): min=1274, max=83421, avg=15298.84, stdev=8567.69 00:15:09.053 clat percentiles (usec): 00:15:09.053 | 1.00th=[ 4359], 5.00th=[ 7308], 10.00th=[ 9503], 20.00th=[11207], 00:15:09.053 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[13829], 00:15:09.053 | 70.00th=[15926], 80.00th=[17957], 90.00th=[22152], 95.00th=[32637], 00:15:09.053 | 99.00th=[56361], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:15:09.053 | 99.99th=[69731] 00:15:09.053 bw ( KiB/s): min=13576, max=19192, per=23.76%, avg=16384.00, stdev=3971.11, samples=2 00:15:09.053 iops : min= 3394, max= 4798, avg=4096.00, stdev=992.78, samples=2 00:15:09.053 lat (usec) : 500=0.01% 00:15:09.053 lat (msec) : 2=0.02%, 4=0.11%, 10=9.32%, 20=72.28%, 50=17.51% 00:15:09.053 lat (msec) : 100=0.75% 00:15:09.053 cpu : usr=2.49%, 
sys=4.99%, ctx=306, majf=0, minf=1 00:15:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.054 issued rwts: total=4096,4180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.054 job3: (groupid=0, jobs=1): err= 0: pid=1026381: Wed May 15 03:09:39 2024 00:15:09.054 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:15:09.054 slat (nsec): min=1106, max=11648k, avg=101747.90, stdev=659112.88 00:15:09.054 clat (usec): min=4151, max=32024, avg=12630.96, stdev=3591.26 00:15:09.054 lat (usec): min=4157, max=32027, avg=12732.70, stdev=3623.57 00:15:09.054 clat percentiles (usec): 00:15:09.054 | 1.00th=[ 4293], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[10683], 00:15:09.054 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12518], 00:15:09.054 | 70.00th=[13435], 80.00th=[14877], 90.00th=[16581], 95.00th=[18220], 00:15:09.054 | 99.00th=[24773], 99.50th=[27395], 99.90th=[32113], 99.95th=[32113], 00:15:09.054 | 99.99th=[32113] 00:15:09.054 write: IOPS=5027, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1010msec); 0 zone resets 00:15:09.054 slat (nsec): min=1777, max=14263k, avg=97066.47, stdev=521365.27 00:15:09.054 clat (usec): min=1151, max=59301, avg=13776.62, stdev=7068.35 00:15:09.054 lat (usec): min=1161, max=59310, avg=13873.69, stdev=7101.99 00:15:09.054 clat percentiles (usec): 00:15:09.054 | 1.00th=[ 3064], 5.00th=[ 5276], 10.00th=[ 7635], 20.00th=[10290], 00:15:09.054 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:15:09.054 | 70.00th=[12780], 80.00th=[17433], 90.00th=[20841], 95.00th=[26346], 00:15:09.054 | 99.00th=[44827], 99.50th=[52691], 99.90th=[59507], 99.95th=[59507], 00:15:09.054 | 99.99th=[59507] 00:15:09.054 bw ( KiB/s): min=19128, max=20480, per=28.72%, avg=19804.00, stdev=956.01, samples=2 00:15:09.054 iops : min= 4782, max= 5120, avg=4951.00, stdev=239.00, samples=2 00:15:09.054 lat (msec) : 2=0.31%, 4=0.51%, 10=15.20%, 20=74.75%, 50=8.85% 00:15:09.054 lat (msec) : 100=0.39% 00:15:09.054 cpu : usr=3.47%, sys=4.16%, ctx=531, majf=0, minf=1 00:15:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.054 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.054 00:15:09.054 Run status group 0 (all jobs): 00:15:09.054 READ: bw=62.7MiB/s (65.7MB/s), 13.9MiB/s-17.8MiB/s (14.6MB/s-18.7MB/s), io=63.3MiB (66.4MB), run=1003-1010msec 00:15:09.054 WRITE: bw=67.3MiB/s (70.6MB/s), 15.8MiB/s-19.6MiB/s (16.5MB/s-20.6MB/s), io=68.0MiB (71.3MB), run=1003-1010msec 00:15:09.054 00:15:09.054 Disk stats (read/write): 00:15:09.054 nvme0n1: ios=3350/3584, merge=0/0, ticks=18068/15449, in_queue=33517, util=98.20% 00:15:09.054 nvme0n2: ios=3221/3584, merge=0/0, ticks=21791/23086, in_queue=44877, util=98.38% 00:15:09.054 nvme0n3: ios=3343/3584, merge=0/0, ticks=25615/25988, in_queue=51603, util=97.40% 00:15:09.054 nvme0n4: ios=4096/4295, merge=0/0, ticks=28858/30138, in_queue=58996, util=89.20% 00:15:09.054 03:09:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:09.054 03:09:39 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1026492 00:15:09.054 03:09:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:09.054 03:09:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:09.054 [global] 00:15:09.054 thread=1 00:15:09.054 invalidate=1 00:15:09.054 rw=read 00:15:09.054 time_based=1 00:15:09.054 runtime=10 00:15:09.054 ioengine=libaio 00:15:09.054 direct=1 00:15:09.054 bs=4096 00:15:09.054 iodepth=1 00:15:09.054 norandommap=1 00:15:09.054 numjobs=1 00:15:09.054 00:15:09.054 [job0] 00:15:09.054 filename=/dev/nvme0n1 00:15:09.054 [job1] 00:15:09.054 filename=/dev/nvme0n2 00:15:09.054 [job2] 00:15:09.054 filename=/dev/nvme0n3 00:15:09.054 [job3] 00:15:09.054 filename=/dev/nvme0n4 00:15:09.054 Could not set queue depth (nvme0n1) 00:15:09.054 Could not set queue depth (nvme0n2) 00:15:09.054 Could not set queue depth (nvme0n3) 00:15:09.054 Could not set queue depth (nvme0n4) 00:15:09.054 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.054 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.054 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.054 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.054 fio-3.35 00:15:09.054 Starting 4 threads 00:15:12.374 03:09:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:12.374 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:12.375 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=303104, buflen=4096 00:15:12.375 fio: pid=1026755, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.375 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.375 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:12.375 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26365952, buflen=4096 00:15:12.375 fio: pid=1026754, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.375 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18812928, buflen=4096 00:15:12.375 fio: pid=1026751, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.375 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.375 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:12.632 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.632 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:12.632 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=42479616, buflen=4096 00:15:12.632 fio: pid=1026753, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:12.632 00:15:12.632 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1026751: Wed May 15 03:09:43 2024 00:15:12.632 read: IOPS=1485, BW=5940KiB/s (6082kB/s)(17.9MiB/3093msec) 00:15:12.632 slat (usec): min=4, max=27339, avg=15.75, stdev=444.91 00:15:12.632 clat (usec): min=238, max=42033, avg=651.98, stdev=3773.34 00:15:12.632 lat (usec): min=245, max=42045, avg=667.73, stdev=3799.78 00:15:12.632 clat percentiles (usec): 00:15:12.632 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 00:15:12.632 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:15:12.632 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 00:15:12.632 | 99.00th=[ 433], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:12.632 | 99.99th=[42206] 00:15:12.632 bw ( KiB/s): min= 111, max=13104, per=27.51%, avg=7137.40, stdev=5688.91, samples=5 00:15:12.632 iops : min= 27, max= 3276, avg=1784.20, stdev=1422.46, samples=5 00:15:12.632 lat (usec) : 250=0.04%, 500=99.04%, 750=0.02% 00:15:12.632 lat (msec) : 50=0.87% 00:15:12.632 cpu : usr=0.23%, sys=1.46%, ctx=4598, majf=0, minf=1 00:15:12.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 issued rwts: total=4594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.632 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1026753: Wed May 15 03:09:43 2024 00:15:12.632 read: IOPS=3132, BW=12.2MiB/s (12.8MB/s)(40.5MiB/3311msec) 00:15:12.632 slat (usec): min=5, max=15075, avg=11.39, stdev=241.82 00:15:12.632 clat (usec): min=205, max=41238, avg=304.65, stdev=982.17 00:15:12.632 lat (usec): min=212, max=55001, avg=316.05, stdev=1085.44 00:15:12.632 clat percentiles (usec): 00:15:12.632 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 245], 00:15:12.632 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:15:12.632 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 437], 00:15:12.632 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 474], 99.95th=[40633], 00:15:12.632 | 99.99th=[41157] 00:15:12.632 bw ( KiB/s): min=11906, max=16606, per=51.70%, avg=13412.00, stdev=1669.64, samples=6 00:15:12.632 iops : min= 2976, max= 4151, avg=3352.83, stdev=417.31, samples=6 00:15:12.632 lat (usec) : 250=23.12%, 500=76.79%, 750=0.02% 00:15:12.632 lat (msec) : 50=0.06% 00:15:12.632 cpu : usr=0.63%, sys=2.90%, ctx=10376, majf=0, minf=1 00:15:12.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 issued rwts: total=10372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.632 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1026754: Wed May 15 03:09:43 2024 00:15:12.632 read: IOPS=2199, BW=8797KiB/s (9008kB/s)(25.1MiB/2927msec) 00:15:12.632 slat (nsec): min=6191, max=32091, avg=7286.49, stdev=1380.76 00:15:12.632 clat (usec): min=222, max=42027, avg=442.76, stdev=2653.92 
00:15:12.632 lat (usec): min=229, max=42050, avg=450.05, stdev=2654.71 00:15:12.632 clat percentiles (usec): 00:15:12.632 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:15:12.632 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:15:12.632 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:15:12.632 | 99.00th=[ 343], 99.50th=[ 396], 99.90th=[41681], 99.95th=[42206], 00:15:12.632 | 99.99th=[42206] 00:15:12.632 bw ( KiB/s): min= 239, max=14632, per=34.66%, avg=8991.80, stdev=5539.53, samples=5 00:15:12.632 iops : min= 59, max= 3658, avg=2247.80, stdev=1385.18, samples=5 00:15:12.632 lat (usec) : 250=9.79%, 500=89.73%, 750=0.03% 00:15:12.632 lat (msec) : 20=0.02%, 50=0.42% 00:15:12.632 cpu : usr=0.68%, sys=1.88%, ctx=6438, majf=0, minf=1 00:15:12.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 issued rwts: total=6438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.632 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1026755: Wed May 15 03:09:43 2024 00:15:12.632 read: IOPS=27, BW=108KiB/s (111kB/s)(296KiB/2740msec) 00:15:12.632 slat (nsec): min=7900, max=30624, avg=14827.24, stdev=5861.27 00:15:12.632 clat (usec): min=286, max=42040, avg=36729.86, stdev=12761.77 00:15:12.632 lat (usec): min=296, max=42050, avg=36744.60, stdev=12762.34 00:15:12.632 clat percentiles (usec): 00:15:12.632 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 502], 20.00th=[40633], 00:15:12.632 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:12.632 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:15:12.632 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:12.632 | 99.99th=[42206] 00:15:12.632 bw ( KiB/s): min= 96, max= 144, per=0.42%, avg=108.80, stdev=20.08, samples=5 00:15:12.632 iops : min= 24, max= 36, avg=27.20, stdev= 5.02, samples=5 00:15:12.632 lat (usec) : 500=9.33%, 750=1.33% 00:15:12.632 lat (msec) : 50=88.00% 00:15:12.632 cpu : usr=0.07%, sys=0.00%, ctx=75, majf=0, minf=2 00:15:12.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.632 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.632 00:15:12.632 Run status group 0 (all jobs): 00:15:12.632 READ: bw=25.3MiB/s (26.6MB/s), 108KiB/s-12.2MiB/s (111kB/s-12.8MB/s), io=83.9MiB (88.0MB), run=2740-3311msec 00:15:12.632 00:15:12.632 Disk stats (read/write): 00:15:12.632 nvme0n1: ios=4472/0, merge=0/0, ticks=2767/0, in_queue=2767, util=95.39% 00:15:12.632 nvme0n2: ios=10367/0, merge=0/0, ticks=2944/0, in_queue=2944, util=95.30% 00:15:12.632 nvme0n3: ios=6435/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.52% 00:15:12.632 nvme0n4: ios=71/0, merge=0/0, ticks=2596/0, in_queue=2596, util=96.45% 00:15:12.888 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.888 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:12.888 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:12.888 03:09:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:13.145 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.145 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:13.402 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.402 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1026492 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:13.659 nvmf hotplug test: fio failed as expected 00:15:13.659 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:13.916 03:09:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.916 rmmod nvme_tcp 00:15:13.916 rmmod nvme_fabrics 00:15:13.916 rmmod nvme_keyring 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1023691 ']' 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1023691 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1023691 ']' 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1023691 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.916 03:09:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1023691 00:15:13.916 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:13.916 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:13.916 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1023691' 00:15:13.916 killing process with pid 1023691 00:15:13.916 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1023691 00:15:13.916 [2024-05-15 03:09:45.023580] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:13.916 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1023691 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.174 03:09:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.702 03:09:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.703 00:15:16.703 real 0m25.903s 00:15:16.703 user 1m46.257s 00:15:16.703 sys 0m7.591s 00:15:16.703 03:09:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.703 03:09:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.703 ************************************ 00:15:16.703 END TEST nvmf_fio_target 00:15:16.703 ************************************ 00:15:16.703 03:09:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:15:16.703 03:09:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:16.703 03:09:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.703 03:09:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.703 ************************************ 00:15:16.703 START TEST nvmf_bdevio 00:15:16.703 ************************************ 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:16.703 * Looking for test storage... 00:15:16.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
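The common.sh preamble above fixes the identifiers everything later keys off: `nvme gen-hostnqn` mints the initiator's host NQN, and NVMF_SERIAL pins the target serial to SPDKISFASTANDAWESOME so the waitforserial/waitforserial_disconnect helpers can match the namespace in lsblk output. A minimal sketch of the same handshake done by hand, assuming the 10.0.0.2:4420 listener and cnode1 subsystem that this log provisions further down:

    # mint a host NQN once and reuse it for every connect
    HOSTNQN=$(nvme gen-hostnqn)
    # attach to the target subsystem over NVMe/TCP
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    # locate the namespace by its serial, exactly as the helpers do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME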
00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.703 03:09:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:21.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:21.964 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:21.964 Found net devices under 0000:86:00.0: cvl_0_0 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:21.964 Found net devices under 0000:86:00.1: cvl_0_1 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.964 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.965 03:09:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:21.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:15:21.965 00:15:21.965 --- 10.0.0.2 ping statistics --- 00:15:21.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.965 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:15:21.965 00:15:21.965 --- 10.0.0.1 ping statistics --- 00:15:21.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.965 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1030992 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1030992 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1030992 ']' 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.965 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.223 [2024-05-15 03:09:53.141353] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
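Everything nvmf_tcp_init just did reduces to a short recipe: the first E810 port (cvl_0_0) moves into a private namespace to serve as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened, and reachability is ping-verified in both directions before nvmf_tgt starts under the namespace. Condensed from the trace above (interface names and addresses are this rig's defaults):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays outside
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the two ports are cabled back-to-back on this phy rig, NVMe/TCP traffic between the namespaced target and the host-side initiator actually crosses the NICs rather than loopback.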
00:15:22.223 [2024-05-15 03:09:53.141394] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.223 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.223 [2024-05-15 03:09:53.199478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.223 [2024-05-15 03:09:53.270170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.223 [2024-05-15 03:09:53.270209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.223 [2024-05-15 03:09:53.270216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.223 [2024-05-15 03:09:53.270222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.223 [2024-05-15 03:09:53.270227] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.223 [2024-05-15 03:09:53.270344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:22.223 [2024-05-15 03:09:53.270890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:22.223 [2024-05-15 03:09:53.270980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.223 [2024-05-15 03:09:53.270981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:22.801 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:22.801 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:15:22.801 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.801 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.801 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 [2024-05-15 03:09:53.991309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 03:09:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 Malloc0 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.058 [2024-05-15 03:09:54.034532] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:23.058 [2024-05-15 03:09:54.034785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.058 { 00:15:23.058 "params": { 00:15:23.058 "name": "Nvme$subsystem", 00:15:23.058 "trtype": "$TEST_TRANSPORT", 00:15:23.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.058 "adrfam": "ipv4", 00:15:23.058 "trsvcid": "$NVMF_PORT", 00:15:23.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.058 "hdgst": ${hdgst:-false}, 00:15:23.058 "ddgst": ${ddgst:-false} 00:15:23.058 }, 00:15:23.058 "method": "bdev_nvme_attach_controller" 00:15:23.058 } 00:15:23.058 EOF 00:15:23.058 )") 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:23.058 03:09:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.058 "params": { 00:15:23.058 "name": "Nvme1", 00:15:23.058 "trtype": "tcp", 00:15:23.058 "traddr": "10.0.0.2", 00:15:23.058 "adrfam": "ipv4", 00:15:23.058 "trsvcid": "4420", 00:15:23.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.058 "hdgst": false, 00:15:23.058 "ddgst": false 00:15:23.058 }, 00:15:23.058 "method": "bdev_nvme_attach_controller" 00:15:23.058 }' 00:15:23.058 [2024-05-15 03:09:54.081076] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
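In plain rpc.py terms, the rpc_cmd sequence above provisioned the entire target that the bdevio app now starting will drive: common.sh's NVMF_TRANSPORT_OPTS='-t tcp -o' creates the TCP transport, a 64 MiB RAM-backed bdev becomes namespace 1 of cnode1, and a listener goes up on 10.0.0.2:4420; the generated JSON merely tells bdevio to attach one NVMe-oF controller (Nvme1) to that listener. The same provisioning replayed by hand against a running nvmf_tgt, using the workspace's rpc.py path:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192    # -u 8192 = I/O unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The [listen_]address.transport deprecation notice above is informational — the listener still comes up — and the matching 'hit 1 times' summary appears again at target shutdown below.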
00:15:23.058 [2024-05-15 03:09:54.081124] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031242 ] 00:15:23.058 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.058 [2024-05-15 03:09:54.134895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.058 [2024-05-15 03:09:54.209753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.058 [2024-05-15 03:09:54.209850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.058 [2024-05-15 03:09:54.209850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.622 I/O targets: 00:15:23.622 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:23.622 00:15:23.622 00:15:23.622 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.622 http://cunit.sourceforge.net/ 00:15:23.622 00:15:23.622 00:15:23.622 Suite: bdevio tests on: Nvme1n1 00:15:23.622 Test: blockdev write read block ...passed 00:15:23.622 Test: blockdev write zeroes read block ...passed 00:15:23.622 Test: blockdev write zeroes read no split ...passed 00:15:23.622 Test: blockdev write zeroes read split ...passed 00:15:23.622 Test: blockdev write zeroes read split partial ...passed 00:15:23.622 Test: blockdev reset ...[2024-05-15 03:09:54.768261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.622 [2024-05-15 03:09:54.768323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7a7f0 (9): Bad file descriptor 00:15:23.879 [2024-05-15 03:09:54.787152] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:23.879 passed 00:15:23.879 Test: blockdev write read 8 blocks ...passed 00:15:23.879 Test: blockdev write read size > 128k ...passed 00:15:23.879 Test: blockdev write read invalid size ...passed 00:15:23.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.879 Test: blockdev write read max offset ...passed 00:15:23.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.879 Test: blockdev writev readv 8 blocks ...passed 00:15:23.879 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.879 Test: blockdev writev readv block ...passed 00:15:23.879 Test: blockdev writev readv size > 128k ...passed 00:15:24.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:24.136 Test: blockdev comparev and writev ...[2024-05-15 03:09:55.042683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.042724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.042731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.043666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.136 [2024-05-15 03:09:55.043674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:24.136 passed 00:15:24.136 Test: blockdev nvme passthru rw ...passed 00:15:24.136 Test: blockdev nvme passthru vendor specific ...[2024-05-15 03:09:55.125799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.136 [2024-05-15 03:09:55.125815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.125942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.136 [2024-05-15 03:09:55.125952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.126079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.136 [2024-05-15 03:09:55.126088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:24.136 [2024-05-15 03:09:55.126213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.136 [2024-05-15 03:09:55.126226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:24.136 passed 00:15:24.136 Test: blockdev nvme admin passthru ...passed 00:15:24.136 Test: blockdev copy ...passed 00:15:24.136 00:15:24.136 Run Summary: Type Total Ran Passed Failed Inactive 00:15:24.136 suites 1 1 n/a 0 0 00:15:24.136 tests 23 23 23 0 0 00:15:24.136 asserts 152 152 152 0 n/a 00:15:24.136 00:15:24.136 Elapsed time = 1.220 seconds 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.394 rmmod nvme_tcp 00:15:24.394 rmmod nvme_fabrics 00:15:24.394 rmmod nvme_keyring 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1030992 ']' 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1030992 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
1030992 ']' 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1030992 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1030992 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1030992' 00:15:24.394 killing process with pid 1030992 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1030992 00:15:24.394 [2024-05-15 03:09:55.488665] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:24.394 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1030992 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.652 03:09:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.183 03:09:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.184 00:15:27.184 real 0m10.402s 00:15:27.184 user 0m13.922s 00:15:27.184 sys 0m4.738s 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.184 ************************************ 00:15:27.184 END TEST nvmf_bdevio 00:15:27.184 ************************************ 00:15:27.184 03:09:57 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:15:27.184 03:09:57 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:27.184 03:09:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:27.184 03:09:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.184 03:09:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.184 ************************************ 00:15:27.184 START TEST nvmf_bdevio_no_huge 00:15:27.184 ************************************ 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:27.184 * Looking for test storage... 
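With the first bdevio pass torn down, the suite immediately reruns the identical exercise with hugepages disabled: the --no-hugepages argument makes the harness populate NO_HUGE, which common.sh splices into every SPDK command line via NVMF_APP+=("${NO_HUGE[@]}"), so nvmf_tgt and bdevio run on ordinary 4 KiB pages and exercise DPDK's no-huge memory path. A sketch of the entry point — the expanded flags shown in the comment are an assumption from the harness's conventions, not copied from this log:

    # same test, no hugepages anywhere in the stack
    run_test nvmf_bdevio_no_huge \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
        --transport=tcp --no-hugepages
    # each app launch then likely gains flags along the lines of:
    #   nvmf_tgt ... --no-huge -s 1024   # fixed-size heap built from 4 KiB pages

Everything else — the PCI scan, the namespace plumbing, the subsystem provisioning, the 23 bdevio cases — repeats unchanged, which is why the setup below mirrors the sequence already seen above.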
00:15:27.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.184 03:09:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.184 03:09:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.448 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.448 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.448 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.448 03:10:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.449 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:32.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:32.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:15:32.449
00:15:32.449 --- 10.0.0.2 ping statistics ---
00:15:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:32.449 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:32.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:32.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:15:32.449
00:15:32.449 --- 10.0.0.1 ping statistics ---
00:15:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:32.449 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1034885
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1034885
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1034885 ']'
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
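The nvmf_tcp_init sequence traced above is the whole test topology: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace to play the target, the other port (cvl_0_1) stays in the root namespace as the initiator, a firewall rule admits the NVMe/TCP listen port, and a ping in each direction proves the wire. Condensed into plain iproute2 commands (names and addresses as in the trace; the addr flushes and error handling are omitted), the setup is roughly:

# Target NIC gets its own namespace so both ends of the cable can live on one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side (root namespace) and target side (test namespace).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in on the default port, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then started inside the namespace (the ip netns exec cvl_0_0_ns_spdk nvmf_tgt command above), with --no-huge -s 1024 so this bdevio case exercises the hugepage-free allocation path.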
00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.449 03:10:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 [2024-05-15 03:10:03.502268] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:15:32.449 [2024-05-15 03:10:03.502314] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:32.449 [2024-05-15 03:10:03.564073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.706 [2024-05-15 03:10:03.647487] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.706 [2024-05-15 03:10:03.647525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.706 [2024-05-15 03:10:03.647532] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.706 [2024-05-15 03:10:03.647537] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.706 [2024-05-15 03:10:03.647544] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.706 [2024-05-15 03:10:03.647666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:32.706 [2024-05-15 03:10:03.647780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:32.706 [2024-05-15 03:10:03.647888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.706 [2024-05-15 03:10:03.647890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.268 [2024-05-15 03:10:04.347795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:33.268 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.269 Malloc0 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.269 [2024-05-15 03:10:04.391886] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:33.269 [2024-05-15 03:10:04.392109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.269 { 00:15:33.269 "params": { 00:15:33.269 "name": "Nvme$subsystem", 00:15:33.269 "trtype": "$TEST_TRANSPORT", 00:15:33.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.269 "adrfam": "ipv4", 00:15:33.269 "trsvcid": "$NVMF_PORT", 00:15:33.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.269 "hdgst": ${hdgst:-false}, 00:15:33.269 "ddgst": ${ddgst:-false} 00:15:33.269 }, 00:15:33.269 "method": "bdev_nvme_attach_controller" 00:15:33.269 } 00:15:33.269 EOF 00:15:33.269 )") 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
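For the bdevio invocation at target/bdevio.sh@24 above, gen_nvmf_target_json builds the attach-controller configuration in memory: each subsystem appends one heredoc-rendered JSON object to a bash array, the array is joined with commas through IFS, and the result is validated and pretty-printed with jq (the rendered object is printed just below). The --json /dev/fd/62 on the bdevio command line is how a process substitution such as <(gen_nvmf_target_json) renders in a traced command, so no temp file is ever written. A minimal sketch of that pattern, reduced to a single hard-coded subsystem:

# Accumulate one JSON stanza per subsystem, join with commas, sanity-check
# with jq, mirroring the nvmf/common.sh@554-558 trace above.
config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
IFS=,
printf '%s\n' "${config[*]}" | jq .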
00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:33.269 03:10:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.269 "params": { 00:15:33.269 "name": "Nvme1", 00:15:33.269 "trtype": "tcp", 00:15:33.269 "traddr": "10.0.0.2", 00:15:33.269 "adrfam": "ipv4", 00:15:33.269 "trsvcid": "4420", 00:15:33.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.269 "hdgst": false, 00:15:33.269 "ddgst": false 00:15:33.269 }, 00:15:33.269 "method": "bdev_nvme_attach_controller" 00:15:33.269 }' 00:15:33.269 [2024-05-15 03:10:04.427691] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:15:33.269 [2024-05-15 03:10:04.427739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1035020 ] 00:15:33.525 [2024-05-15 03:10:04.484368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.525 [2024-05-15 03:10:04.572746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.525 [2024-05-15 03:10:04.572764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.525 [2024-05-15 03:10:04.572766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.780 I/O targets: 00:15:33.780 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:33.780 00:15:33.780 00:15:33.780 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.780 http://cunit.sourceforge.net/ 00:15:33.780 00:15:33.780 00:15:33.780 Suite: bdevio tests on: Nvme1n1 00:15:33.780 Test: blockdev write read block ...passed 00:15:33.780 Test: blockdev write zeroes read block ...passed 00:15:33.781 Test: blockdev write zeroes read no split ...passed 00:15:33.781 Test: blockdev write zeroes read split ...passed 00:15:34.037 Test: blockdev write zeroes read split partial ...passed 00:15:34.037 Test: blockdev reset ...[2024-05-15 03:10:04.963816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:34.037 [2024-05-15 03:10:04.963878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11355a0 (9): Bad file descriptor 00:15:34.037 [2024-05-15 03:10:05.020277] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:34.037 passed 00:15:34.037 Test: blockdev write read 8 blocks ...passed 00:15:34.037 Test: blockdev write read size > 128k ...passed 00:15:34.037 Test: blockdev write read invalid size ...passed 00:15:34.037 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.037 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.037 Test: blockdev write read max offset ...passed 00:15:34.037 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.037 Test: blockdev writev readv 8 blocks ...passed 00:15:34.037 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.294 Test: blockdev writev readv block ...passed 00:15:34.294 Test: blockdev writev readv size > 128k ...passed 00:15:34.294 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.294 Test: blockdev comparev and writev ...[2024-05-15 03:10:05.236451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.236485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.236499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.236507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.236778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.236791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.236802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.236810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.237070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.237079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.237090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.237097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.237346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.237366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.294 [2024-05-15 03:10:05.237373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:34.294 passed 00:15:34.294 Test: blockdev nvme passthru rw ...passed 00:15:34.294 Test: blockdev nvme passthru vendor specific ...[2024-05-15 03:10:05.320842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.294 [2024-05-15 03:10:05.320857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.320984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.294 [2024-05-15 03:10:05.320994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.321117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.294 [2024-05-15 03:10:05.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:34.294 [2024-05-15 03:10:05.321246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.294 [2024-05-15 03:10:05.321255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:34.294 passed 00:15:34.294 Test: blockdev nvme admin passthru ...passed 00:15:34.294 Test: blockdev copy ...passed 00:15:34.294 00:15:34.294 Run Summary: Type Total Ran Passed Failed Inactive 00:15:34.294 suites 1 1 n/a 0 0 00:15:34.294 tests 23 23 23 0 0 00:15:34.294 asserts 152 152 152 0 n/a 00:15:34.294 00:15:34.294 Elapsed time = 1.232 seconds 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.550 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.550 rmmod nvme_tcp 00:15:34.550 rmmod nvme_fabrics 00:15:34.550 rmmod nvme_keyring 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1034885 ']' 00:15:34.808 03:10:05 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1034885 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1034885 ']' 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1034885 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1034885 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1034885' 00:15:34.808 killing process with pid 1034885 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1034885 00:15:34.808 [2024-05-15 03:10:05.775436] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:34.808 03:10:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1034885 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.066 03:10:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.596 03:10:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.596 00:15:37.596 real 0m10.321s 00:15:37.596 user 0m13.284s 00:15:37.596 sys 0m4.966s 00:15:37.596 03:10:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:37.596 03:10:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:37.596 ************************************ 00:15:37.596 END TEST nvmf_bdevio_no_huge 00:15:37.596 ************************************ 00:15:37.596 03:10:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:37.596 03:10:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:37.596 03:10:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:37.596 03:10:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.596 ************************************ 00:15:37.596 START TEST nvmf_tls 00:15:37.596 ************************************ 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
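The nvmf_bdevio_no_huge teardown that closed out above, just before the TLS suite was launched, mirrors the setup: the trap is cleared, the initiator-side kernel modules are unloaded, the target is killed by pid and reaped, and the namespace plumbing is removed. Note that _remove_spdk_ns runs with xtrace silenced (eval '_remove_spdk_ns 14> /dev/null'), so its body never appears in the log; presumably it deletes the cvl_0_0_ns_spdk namespace, which this sketch writes out explicitly:

# Initiator modules out (the rmmod lines above); nvmftestfini retries this in a loop.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and reap it (killprocess 1034885 above does kill, then wait).
kill 1034885
wait 1034885

# Namespace plumbing down; only the addr flush is visible, at nvmf/common.sh@279.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1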
00:15:37.596 * Looking for test storage... 00:15:37.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.596 03:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:15:42.914 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:42.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.915 
03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:42.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:42.915 Found net devices under 0000:86:00.0: cvl_0_0 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:42.915 Found net devices under 0000:86:00.1: cvl_0_1 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.915 
03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:15:42.915 00:15:42.915 --- 10.0.0.2 ping statistics --- 00:15:42.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.915 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:15:42.915 00:15:42.915 --- 10.0.0.1 ping statistics --- 00:15:42.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.915 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1038762 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1038762 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1038762 ']' 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:42.915 03:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.915 [2024-05-15 03:10:13.612306] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:15:42.915 [2024-05-15 03:10:13.612352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.915 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.915 [2024-05-15 03:10:13.670899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.915 [2024-05-15 03:10:13.749222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.915 [2024-05-15 03:10:13.749255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:42.915 [2024-05-15 03:10:13.749262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.915 [2024-05-15 03:10:13.749268] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.915 [2024-05-15 03:10:13.749273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.915 [2024-05-15 03:10:13.749290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:43.481 true 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:43.481 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:43.740 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:43.740 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:43.740 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:43.998 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:43.998 03:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:43.998 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:43.998 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:43.998 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:44.256 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:44.256 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:44.514 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:44.772 03:10:15 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:44.772 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:15:45.030 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:45.030 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:45.030 03:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:45.030 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:45.030 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.YetHNHDjPI 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.o64wS4KXGV 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.YetHNHDjPI 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.o64wS4KXGV 00:15:45.288 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:15:45.545 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:15:45.803 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.YetHNHDjPI 00:15:45.803 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YetHNHDjPI 00:15:45.803 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:46.060 [2024-05-15 03:10:16.978439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.060 03:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:46.060 03:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:46.318 [2024-05-15 03:10:17.315284] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:46.318 [2024-05-15 03:10:17.315350] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:46.318 [2024-05-15 03:10:17.315548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.318 03:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:46.575 malloc0 00:15:46.575 03:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:46.575 03:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YetHNHDjPI 00:15:46.833 [2024-05-15 03:10:17.824766] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:46.833 03:10:17 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YetHNHDjPI 00:15:46.833 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.793 Initializing NVMe Controllers 00:15:56.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:56.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:56.793 Initialization complete. Launching workers. 
00:15:56.793 ======================================================== 00:15:56.793 Latency(us) 00:15:56.793 Device Information : IOPS MiB/s Average min max 00:15:56.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16600.22 64.84 3855.77 855.25 4551.03 00:15:56.793 ======================================================== 00:15:56.793 Total : 16600.22 64.84 3855.77 855.25 4551.03 00:15:56.793 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YetHNHDjPI 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YetHNHDjPI' 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1041115 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1041115 /var/tmp/bdevperf.sock 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1041115 ']' 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.793 03:10:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.051 [2024-05-15 03:10:27.989282] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
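Annotation: the two interchange keys minted above (held in /tmp/tmp.YetHNHDjPI and /tmp/tmp.o64wS4KXGV, chmod 0600) follow the NVMe/TCP TLS PSK interchange layout NVMeTLSkey-1:<hh>:<base64 payload>:. The base64 prefix MDAxMTIy... decodes to the ASCII key text, so the payload is the configured key bytes plus a short trailer. A minimal standalone sketch of format_interchange_psk follows; the helper's python body is not shown in the trace, so the trailer being a 4-byte little-endian CRC32 of the key bytes is an assumption.

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailer: little-endian CRC32 of the key bytes
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
# If the CRC32 assumption holds, this reproduces the first value logged above:
#   format_interchange_psk 00112233445566778899aabbccddeeff 1
#   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: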
00:15:57.051 [2024-05-15 03:10:27.989329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041115 ] 00:15:57.051 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.051 [2024-05-15 03:10:28.038448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.051 [2024-05-15 03:10:28.115512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.985 03:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.985 03:10:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:57.985 03:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YetHNHDjPI 00:15:57.985 [2024-05-15 03:10:28.944995] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:57.985 [2024-05-15 03:10:28.945061] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:57.985 TLSTESTn1 00:15:57.985 03:10:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:57.985 Running I/O for 10 seconds... 00:16:10.182 00:16:10.182 Latency(us) 00:16:10.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.182 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:10.182 Verification LBA range: start 0x0 length 0x2000 00:16:10.182 TLSTESTn1 : 10.02 5550.30 21.68 0.00 0.00 23022.41 5128.90 48325.68 00:16:10.182 =================================================================================================================== 00:16:10.182 Total : 5550.30 21.68 0.00 0.00 23022.41 5128.90 48325.68 00:16:10.182 0 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1041115 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1041115 ']' 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1041115 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1041115 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1041115' 00:16:10.182 killing process with pid 1041115 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1041115 00:16:10.182 Received shutdown signal, test time was about 10.000000 seconds 00:16:10.182 00:16:10.182 Latency(us) 00:16:10.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:10.182 =================================================================================================================== 00:16:10.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.182 [2024-05-15 03:10:39.236032] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1041115 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o64wS4KXGV 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o64wS4KXGV 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o64wS4KXGV 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o64wS4KXGV' 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042954 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042954 /var/tmp/bdevperf.sock 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1042954 ']' 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.182 [2024-05-15 03:10:39.490275] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
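Annotation: every case below, positive or negative, reuses the run_bdevperf shape just traced for pid 1041115. Condensed here with all flags taken verbatim from the logged command lines; the waitforlisten polling and cleanup traps are elided.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
# start bdevperf idle (-z) on its own RPC socket, core mask 0x4
$spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
# attach the TLS target, offering the PSK file; the NOT-wrapped cases expect this step to fail
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YetHNHDjPI
# drive the queued verify workload and collect the latency table
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests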
00:16:10.182 [2024-05-15 03:10:39.490324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042954 ] 00:16:10.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.182 [2024-05-15 03:10:39.541402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.182 [2024-05-15 03:10:39.609410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.182 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o64wS4KXGV 00:16:10.183 [2024-05-15 03:10:39.859099] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.183 [2024-05-15 03:10:39.859177] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:10.183 [2024-05-15 03:10:39.864741] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:10.183 [2024-05-15 03:10:39.865445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee490 (107): Transport endpoint is not connected 00:16:10.183 [2024-05-15 03:10:39.866438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee490 (9): Bad file descriptor 00:16:10.183 [2024-05-15 03:10:39.867439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:10.183 [2024-05-15 03:10:39.867449] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:10.183 [2024-05-15 03:10:39.867458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:10.183 request: 00:16:10.183 { 00:16:10.183 "name": "TLSTEST", 00:16:10.183 "trtype": "tcp", 00:16:10.183 "traddr": "10.0.0.2", 00:16:10.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:10.183 "adrfam": "ipv4", 00:16:10.183 "trsvcid": "4420", 00:16:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.183 "psk": "/tmp/tmp.o64wS4KXGV", 00:16:10.183 "method": "bdev_nvme_attach_controller", 00:16:10.183 "req_id": 1 00:16:10.183 } 00:16:10.183 Got JSON-RPC error response 00:16:10.183 response: 00:16:10.183 { 00:16:10.183 "code": -32602, 00:16:10.183 "message": "Invalid parameters" 00:16:10.183 } 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1042954 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1042954 ']' 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1042954 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1042954 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1042954' 00:16:10.183 killing process with pid 1042954 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1042954 00:16:10.183 Received shutdown signal, test time was about 10.000000 seconds 00:16:10.183 00:16:10.183 Latency(us) 00:16:10.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.183 =================================================================================================================== 00:16:10.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:10.183 [2024-05-15 03:10:39.933479] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:10.183 03:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1042954 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YetHNHDjPI 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YetHNHDjPI 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YetHNHDjPI 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YetHNHDjPI' 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043186 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043186 /var/tmp/bdevperf.sock 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1043186 ']' 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.183 [2024-05-15 03:10:40.187114] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
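Annotation: the NOT wrapper driving these expected-failure cases is visible only through its xtrace (@648 local es=0 through @675 (( !es == 0 ))). A hypothetical condensation consistent with those lines follows; the real autotest_common.sh helper also validates the argument type and screens an expected-status list, both elided here.

NOT() {
    local es=0
    "$@" || es=$?                # run the wrapped command and capture its exit status
    (( es > 128 )) && return 1   # assumed: a signal death counts as a real failure, never an expected one
    (( es != 0 ))                # succeed only if the command failed
}
# as traced above: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YetHNHDjPI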
00:16:10.183 [2024-05-15 03:10:40.187163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043186 ] 00:16:10.183 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.183 [2024-05-15 03:10:40.238203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.183 [2024-05-15 03:10:40.306261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:10.183 03:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.YetHNHDjPI 00:16:10.183 [2024-05-15 03:10:41.156340] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.183 [2024-05-15 03:10:41.156417] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:10.183 [2024-05-15 03:10:41.167978] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:10.183 [2024-05-15 03:10:41.168006] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:10.183 [2024-05-15 03:10:41.168030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:10.183 [2024-05-15 03:10:41.168677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d86490 (107): Transport endpoint is not connected 00:16:10.183 [2024-05-15 03:10:41.169670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d86490 (9): Bad file descriptor 00:16:10.183 [2024-05-15 03:10:41.170671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:10.183 [2024-05-15 03:10:41.170680] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:10.183 [2024-05-15 03:10:41.170691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:10.183 request: 00:16:10.183 { 00:16:10.183 "name": "TLSTEST", 00:16:10.183 "trtype": "tcp", 00:16:10.183 "traddr": "10.0.0.2", 00:16:10.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:10.183 "adrfam": "ipv4", 00:16:10.183 "trsvcid": "4420", 00:16:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.183 "psk": "/tmp/tmp.YetHNHDjPI", 00:16:10.183 "method": "bdev_nvme_attach_controller", 00:16:10.183 "req_id": 1 00:16:10.183 } 00:16:10.183 Got JSON-RPC error response 00:16:10.183 response: 00:16:10.183 { 00:16:10.183 "code": -32602, 00:16:10.183 "message": "Invalid parameters" 00:16:10.183 } 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1043186 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1043186 ']' 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1043186 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1043186 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1043186' 00:16:10.183 killing process with pid 1043186 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1043186 00:16:10.183 Received shutdown signal, test time was about 10.000000 seconds 00:16:10.183 00:16:10.183 Latency(us) 00:16:10.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.183 =================================================================================================================== 00:16:10.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:10.183 [2024-05-15 03:10:41.235104] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:10.183 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1043186 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YetHNHDjPI 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YetHNHDjPI 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YetHNHDjPI 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:10.441 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YetHNHDjPI' 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043421 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043421 /var/tmp/bdevperf.sock 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1043421 ']' 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.442 03:10:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 [2024-05-15 03:10:41.483042] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
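Annotation: this case and the previous one fail identically on the target side: tcp_sock_get_key derives the lookup identity as "NVMe0R01 <hostnqn> <subnqn>", and no PSK is registered for "host2 cnode1" (above) or "host1 cnode2" (below). Making the host2 variant succeed would take one more registration, sketched here with a hypothetical second key file; the cnode2 variant would additionally need that subsystem created first.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# register host2 with its own PSK before the attach; /tmp/host2.psk is illustrative only
$spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/host2.psk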
00:16:10.442 [2024-05-15 03:10:41.483089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043421 ] 00:16:10.442 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.442 [2024-05-15 03:10:41.532233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.442 [2024-05-15 03:10:41.598939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YetHNHDjPI 00:16:11.374 [2024-05-15 03:10:42.429528] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.374 [2024-05-15 03:10:42.429602] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:11.374 [2024-05-15 03:10:42.441256] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.374 [2024-05-15 03:10:42.441277] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.374 [2024-05-15 03:10:42.441302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:11.374 [2024-05-15 03:10:42.441856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d2490 (107): Transport endpoint is not connected 00:16:11.374 [2024-05-15 03:10:42.442848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d2490 (9): Bad file descriptor 00:16:11.374 [2024-05-15 03:10:42.443849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:11.374 [2024-05-15 03:10:42.443859] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:11.374 [2024-05-15 03:10:42.443868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:11.374 request: 00:16:11.374 { 00:16:11.374 "name": "TLSTEST", 00:16:11.374 "trtype": "tcp", 00:16:11.374 "traddr": "10.0.0.2", 00:16:11.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.374 "adrfam": "ipv4", 00:16:11.374 "trsvcid": "4420", 00:16:11.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:11.374 "psk": "/tmp/tmp.YetHNHDjPI", 00:16:11.374 "method": "bdev_nvme_attach_controller", 00:16:11.374 "req_id": 1 00:16:11.374 } 00:16:11.374 Got JSON-RPC error response 00:16:11.374 response: 00:16:11.374 { 00:16:11.374 "code": -32602, 00:16:11.374 "message": "Invalid parameters" 00:16:11.374 } 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1043421 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1043421 ']' 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1043421 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1043421 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1043421' 00:16:11.374 killing process with pid 1043421 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1043421 00:16:11.374 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.374 00:16:11.374 Latency(us) 00:16:11.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.374 =================================================================================================================== 00:16:11.374 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.374 [2024-05-15 03:10:42.509935] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:11.374 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1043421 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043667 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043667 /var/tmp/bdevperf.sock 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1043667 ']' 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:11.632 03:10:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.632 [2024-05-15 03:10:42.759203] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
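Annotation: the fourth case passes an empty string where the key path goes. The traced assignments (psk='--psk /tmp/tmp.YetHNHDjPI' earlier, a bare psk= here) suggest run_bdevperf splices the option in unquoted, so an empty argument yields a plain TCP attach against the TLS-only listener. A sketch of that splice, with the helper and parameter names assumed:

attach_with_optional_psk() {
    local subnqn=$1 hostnqn=$2 key=$3 psk=""
    [[ -n $key ]] && psk="--psk $key"
    # $psk stays unquoted on purpose: it expands to two words, or to nothing at all
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk
}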
00:16:11.632 [2024-05-15 03:10:42.759250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043667 ] 00:16:11.632 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.890 [2024-05-15 03:10:42.810182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.890 [2024-05-15 03:10:42.878121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.453 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:12.453 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:12.453 03:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:12.710 [2024-05-15 03:10:43.724513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:12.710 [2024-05-15 03:10:43.726086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b08b30 (9): Bad file descriptor 00:16:12.710 [2024-05-15 03:10:43.727084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:12.710 [2024-05-15 03:10:43.727094] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:12.710 [2024-05-15 03:10:43.727102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:12.710 request: 00:16:12.710 { 00:16:12.710 "name": "TLSTEST", 00:16:12.710 "trtype": "tcp", 00:16:12.710 "traddr": "10.0.0.2", 00:16:12.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.710 "adrfam": "ipv4", 00:16:12.710 "trsvcid": "4420", 00:16:12.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.710 "method": "bdev_nvme_attach_controller", 00:16:12.710 "req_id": 1 00:16:12.710 } 00:16:12.710 Got JSON-RPC error response 00:16:12.710 response: 00:16:12.710 { 00:16:12.710 "code": -32602, 00:16:12.710 "message": "Invalid parameters" 00:16:12.710 } 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1043667 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1043667 ']' 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1043667 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1043667 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1043667' 00:16:12.710 killing process with pid 1043667 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1043667 00:16:12.710 Received shutdown signal, test time was about 10.000000 seconds 00:16:12.710 00:16:12.710 Latency(us) 00:16:12.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.710 =================================================================================================================== 00:16:12.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.710 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1043667 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1038762 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1038762 ']' 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1038762 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.967 03:10:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1038762 00:16:12.967 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:12.967 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:12.967 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1038762' 00:16:12.967 killing process with pid 1038762 00:16:12.967 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1038762 
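Annotation: killprocess, traced at @946-@965 just above while tearing down the nvmf target (pid 1038762), asserts the pid is alive and inspects its command name before signalling it. A rough reconstruction of the traced path; the sudo branch (not taken here) and the wait for the pid to exit are elided.

killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid"                                        # fail fast if the pid is already gone
    [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        return 1   # placeholder: the real helper treats sudo-wrapped pids specially
    fi
    echo "killing process with pid $pid"
    kill "$pid"
}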
00:16:12.967 [2024-05-15 03:10:44.036986] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:12.967 [2024-05-15 03:10:44.037016] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:12.967 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1038762 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.zs7qpKdYN5 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.zs7qpKdYN5 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1043913 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1043913 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1043913 ']' 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.224 03:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.224 [2024-05-15 03:10:44.356931] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
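Annotation: nvmfappstart relaunches the target for the key-permission scenario inside the test's network namespace. Reduced to its moving parts, with everything taken from the launch line logged above; cvl_0_0_ns_spdk is the namespace holding the test NIC, set up earlier in the run.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &      # shm id 0, all tracepoint groups, core mask 0x2
nvmfpid=$!
# the helper then polls the app's RPC socket (waitforlisten /var/tmp/spdk.sock) before configuring it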
00:16:13.225 [2024-05-15 03:10:44.356980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.225 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.481 [2024-05-15 03:10:44.414130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.481 [2024-05-15 03:10:44.481363] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.481 [2024-05-15 03:10:44.481403] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.481 [2024-05-15 03:10:44.481410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.481 [2024-05-15 03:10:44.481417] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.481 [2024-05-15 03:10:44.481422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.481 [2024-05-15 03:10:44.481440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zs7qpKdYN5 00:16:14.045 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:14.303 [2024-05-15 03:10:45.344469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.303 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:14.560 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:14.560 [2024-05-15 03:10:45.685314] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:14.560 [2024-05-15 03:10:45.685382] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:14.560 [2024-05-15 03:10:45.685550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.560 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:14.817 malloc0 00:16:14.817 03:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
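Annotation: setup_nvmf_tgt, traced piecewise above and completed by the add_host call just below, boils down to this RPC sequence; flags are exactly as logged, and -k marks the listener as TLS.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
$spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                             # TLS-enabled listener
$spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev, 4096-byte blocks
$spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5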
00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:15.074 [2024-05-15 03:10:46.178618] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zs7qpKdYN5 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zs7qpKdYN5' 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1044177 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1044177 /var/tmp/bdevperf.sock 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1044177 ']' 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:15.074 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.074 [2024-05-15 03:10:46.225538] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:16:15.074 [2024-05-15 03:10:46.225583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044177 ] 00:16:15.399 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.399 [2024-05-15 03:10:46.275368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.399 [2024-05-15 03:10:46.348170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.399 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:15.399 03:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:15.399 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:15.671 [2024-05-15 03:10:46.589119] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:15.671 [2024-05-15 03:10:46.589197] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:15.671 TLSTESTn1 00:16:15.671 03:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.671 Running I/O for 10 seconds... 00:16:27.860 00:16:27.860 Latency(us) 00:16:27.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.860 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:27.860 Verification LBA range: start 0x0 length 0x2000 00:16:27.860 TLSTESTn1 : 10.02 5499.67 21.48 0.00 0.00 23230.80 7123.48 27696.08 00:16:27.860 =================================================================================================================== 00:16:27.860 Total : 5499.67 21.48 0.00 0.00 23230.80 7123.48 27696.08 00:16:27.860 0 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1044177 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1044177 ']' 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1044177 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1044177 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1044177' 00:16:27.860 killing process with pid 1044177 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1044177 00:16:27.860 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.860 00:16:27.860 Latency(us) 00:16:27.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:27.860 =================================================================================================================== 00:16:27.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.860 [2024-05-15 03:10:56.872790] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:27.860 03:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1044177 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.zs7qpKdYN5 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zs7qpKdYN5 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zs7qpKdYN5 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zs7qpKdYN5 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zs7qpKdYN5' 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1046011 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1046011 /var/tmp/bdevperf.sock 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1046011 ']' 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.860 [2024-05-15 03:10:57.132273] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
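Annotation: the chmod 0666 above is the whole point of this last case: bdev_nvme refuses to load a PSK file that group or others can read, which surfaces below as "Incorrect permissions for PSK file" and a -1/"Operation not permitted" RPC response. A guard of the same shape, assuming the 0600-or-stricter policy the test implies; the exact mask SPDK enforces is not shown in this log.

check_psk_perms() {
    local key=$1 mode
    mode=$(stat -c '%a' "$key")
    # reject any group/other permission bits: 600 and 400 pass, 666 fails
    (( (8#$mode & 8#077) == 0 )) || { echo "Incorrect permissions for PSK file" >&2; return 1; }
}
# check_psk_perms /tmp/tmp.zs7qpKdYN5   # fails once the file has been chmod 0666'd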
00:16:27.860 [2024-05-15 03:10:57.132322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046011 ] 00:16:27.860 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.860 [2024-05-15 03:10:57.181958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.860 [2024-05-15 03:10:57.248946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:27.860 03:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:27.860 [2024-05-15 03:10:58.091314] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.860 [2024-05-15 03:10:58.091362] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:27.860 [2024-05-15 03:10:58.091369] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.zs7qpKdYN5 00:16:27.861 request: 00:16:27.861 { 00:16:27.861 "name": "TLSTEST", 00:16:27.861 "trtype": "tcp", 00:16:27.861 "traddr": "10.0.0.2", 00:16:27.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.861 "adrfam": "ipv4", 00:16:27.861 "trsvcid": "4420", 00:16:27.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.861 "psk": "/tmp/tmp.zs7qpKdYN5", 00:16:27.861 "method": "bdev_nvme_attach_controller", 00:16:27.861 "req_id": 1 00:16:27.861 } 00:16:27.861 Got JSON-RPC error response 00:16:27.861 response: 00:16:27.861 { 00:16:27.861 "code": -1, 00:16:27.861 "message": "Operation not permitted" 00:16:27.861 } 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1046011 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1046011 ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1046011 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1046011 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1046011' 00:16:27.861 killing process with pid 1046011 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1046011 00:16:27.861 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.861 00:16:27.861 Latency(us) 00:16:27.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.861 =================================================================================================================== 00:16:27.861 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 1046011 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1043913 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1043913 ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1043913 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1043913 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1043913' 00:16:27.861 killing process with pid 1043913 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1043913 00:16:27.861 [2024-05-15 03:10:58.402473] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:27.861 [2024-05-15 03:10:58.402513] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1043913 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1046257 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1046257 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1046257 ']' 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
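[Annotation, not part of the captured output.] The `es=1` / `(( !es == 0 ))` lines above are the tail of autotest's NOT idiom: the wrapped command's non-zero exit status is captured and inverted, so this step passes precisely because the 0666-key attach failed. A paraphrased sketch of the idiom, not the verbatim helper from autotest_common.sh:

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, remember its exit status
        (( es != 0 ))    # succeed only if the command failed
    }

    NOT /bin/false && echo "expected failure observed"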
00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.861 03:10:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.861 [2024-05-15 03:10:58.671712] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:27.861 [2024-05-15 03:10:58.671758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.861 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.861 [2024-05-15 03:10:58.728350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.861 [2024-05-15 03:10:58.810329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.861 [2024-05-15 03:10:58.810368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.861 [2024-05-15 03:10:58.810375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.861 [2024-05-15 03:10:58.810381] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.861 [2024-05-15 03:10:58.810386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.861 [2024-05-15 03:10:58.810408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zs7qpKdYN5 00:16:28.428 03:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:28.686 [2024-05-15 03:10:59.653256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.686 03:10:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:28.686 03:10:59 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:28.944 [2024-05-15 03:10:59.990089] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:28.944 [2024-05-15 03:10:59.990151] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:28.944 [2024-05-15 03:10:59.990316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.944 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:29.202 malloc0 00:16:29.202 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:29.202 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:29.459 [2024-05-15 03:11:00.511733] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:29.459 [2024-05-15 03:11:00.511761] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:29.459 [2024-05-15 03:11:00.511785] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:29.459 request: 00:16:29.459 { 00:16:29.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.459 "host": "nqn.2016-06.io.spdk:host1", 00:16:29.459 "psk": "/tmp/tmp.zs7qpKdYN5", 00:16:29.459 "method": "nvmf_subsystem_add_host", 00:16:29.459 "req_id": 1 00:16:29.459 } 00:16:29.459 Got JSON-RPC error response 00:16:29.459 response: 00:16:29.459 { 00:16:29.459 "code": -32603, 00:16:29.459 "message": "Internal error" 00:16:29.459 } 00:16:29.459 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:29.459 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.459 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.459 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.459 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1046257 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1046257 ']' 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1046257 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1046257 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1046257' 00:16:29.460 killing process with pid 1046257 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1046257 00:16:29.460 [2024-05-15 03:11:00.578200] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.460 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1046257 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.zs7qpKdYN5 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1046741 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1046741 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1046741 ']' 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:29.718 03:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.718 [2024-05-15 03:11:00.852707] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:29.718 [2024-05-15 03:11:00.852757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.718 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.975 [2024-05-15 03:11:00.910337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.975 [2024-05-15 03:11:00.977496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.975 [2024-05-15 03:11:00.977537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.975 [2024-05-15 03:11:00.977544] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.975 [2024-05-15 03:11:00.977550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.975 [2024-05-15 03:11:00.977555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
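[Annotation, not part of the captured output.] With the key tightened to 0600 (target/tls.sh@181 above) and a fresh target coming up, the lines that follow provision the TLS-capable subsystem end to end. The same rpc.py calls, condensed into one reference sequence; the arguments are exactly those captured in the log, only the script path is shortened:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k: TLS (secure channel) listener
    $RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MiB ram-backed bdev
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.zs7qpKdYN5            # succeeds now that the file is 0600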
00:16:29.975 [2024-05-15 03:11:00.977573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zs7qpKdYN5 00:16:30.540 03:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:30.797 [2024-05-15 03:11:01.840697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.797 03:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:31.055 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:31.055 [2024-05-15 03:11:02.185549] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:31.055 [2024-05-15 03:11:02.185621] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:31.055 [2024-05-15 03:11:02.185769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.055 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:31.312 malloc0 00:16:31.312 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:31.569 [2024-05-15 03:11:02.682858] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1046999 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1046999 /var/tmp/bdevperf.sock 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1046999 ']' 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.569 03:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.827 [2024-05-15 03:11:02.744690] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:31.827 [2024-05-15 03:11:02.744736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046999 ] 00:16:31.827 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.827 [2024-05-15 03:11:02.793962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.827 [2024-05-15 03:11:02.866236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.392 03:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:32.392 03:11:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:32.649 03:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:32.649 [2024-05-15 03:11:03.708686] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.649 [2024-05-15 03:11:03.708767] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:32.649 TLSTESTn1 00:16:32.649 03:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:16:32.907 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:32.907 "subsystems": [ 00:16:32.907 { 00:16:32.907 "subsystem": "keyring", 00:16:32.907 "config": [] 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "subsystem": "iobuf", 00:16:32.907 "config": [ 00:16:32.907 { 00:16:32.907 "method": "iobuf_set_options", 00:16:32.907 "params": { 00:16:32.907 "small_pool_count": 8192, 00:16:32.907 "large_pool_count": 1024, 00:16:32.907 "small_bufsize": 8192, 00:16:32.907 "large_bufsize": 135168 00:16:32.907 } 00:16:32.907 } 00:16:32.907 ] 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "subsystem": "sock", 00:16:32.907 "config": [ 00:16:32.907 { 00:16:32.907 "method": "sock_impl_set_options", 00:16:32.907 "params": { 00:16:32.907 "impl_name": "posix", 00:16:32.907 "recv_buf_size": 2097152, 00:16:32.907 "send_buf_size": 2097152, 00:16:32.907 "enable_recv_pipe": true, 00:16:32.907 "enable_quickack": false, 00:16:32.907 "enable_placement_id": 0, 00:16:32.907 "enable_zerocopy_send_server": true, 00:16:32.907 "enable_zerocopy_send_client": false, 00:16:32.907 "zerocopy_threshold": 0, 00:16:32.907 "tls_version": 0, 00:16:32.907 "enable_ktls": false 00:16:32.907 } 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "method": "sock_impl_set_options", 00:16:32.907 "params": { 00:16:32.907 
"impl_name": "ssl", 00:16:32.907 "recv_buf_size": 4096, 00:16:32.907 "send_buf_size": 4096, 00:16:32.907 "enable_recv_pipe": true, 00:16:32.907 "enable_quickack": false, 00:16:32.907 "enable_placement_id": 0, 00:16:32.907 "enable_zerocopy_send_server": true, 00:16:32.907 "enable_zerocopy_send_client": false, 00:16:32.907 "zerocopy_threshold": 0, 00:16:32.907 "tls_version": 0, 00:16:32.907 "enable_ktls": false 00:16:32.907 } 00:16:32.907 } 00:16:32.907 ] 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "subsystem": "vmd", 00:16:32.907 "config": [] 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "subsystem": "accel", 00:16:32.907 "config": [ 00:16:32.907 { 00:16:32.907 "method": "accel_set_options", 00:16:32.907 "params": { 00:16:32.907 "small_cache_size": 128, 00:16:32.907 "large_cache_size": 16, 00:16:32.907 "task_count": 2048, 00:16:32.907 "sequence_count": 2048, 00:16:32.907 "buf_count": 2048 00:16:32.907 } 00:16:32.907 } 00:16:32.907 ] 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "subsystem": "bdev", 00:16:32.907 "config": [ 00:16:32.907 { 00:16:32.907 "method": "bdev_set_options", 00:16:32.907 "params": { 00:16:32.907 "bdev_io_pool_size": 65535, 00:16:32.907 "bdev_io_cache_size": 256, 00:16:32.907 "bdev_auto_examine": true, 00:16:32.907 "iobuf_small_cache_size": 128, 00:16:32.907 "iobuf_large_cache_size": 16 00:16:32.907 } 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "method": "bdev_raid_set_options", 00:16:32.907 "params": { 00:16:32.907 "process_window_size_kb": 1024 00:16:32.907 } 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "method": "bdev_iscsi_set_options", 00:16:32.907 "params": { 00:16:32.907 "timeout_sec": 30 00:16:32.907 } 00:16:32.907 }, 00:16:32.907 { 00:16:32.907 "method": "bdev_nvme_set_options", 00:16:32.907 "params": { 00:16:32.907 "action_on_timeout": "none", 00:16:32.907 "timeout_us": 0, 00:16:32.907 "timeout_admin_us": 0, 00:16:32.907 "keep_alive_timeout_ms": 10000, 00:16:32.907 "arbitration_burst": 0, 00:16:32.907 "low_priority_weight": 0, 00:16:32.907 "medium_priority_weight": 0, 00:16:32.907 "high_priority_weight": 0, 00:16:32.907 "nvme_adminq_poll_period_us": 10000, 00:16:32.907 "nvme_ioq_poll_period_us": 0, 00:16:32.907 "io_queue_requests": 0, 00:16:32.907 "delay_cmd_submit": true, 00:16:32.907 "transport_retry_count": 4, 00:16:32.907 "bdev_retry_count": 3, 00:16:32.907 "transport_ack_timeout": 0, 00:16:32.907 "ctrlr_loss_timeout_sec": 0, 00:16:32.907 "reconnect_delay_sec": 0, 00:16:32.907 "fast_io_fail_timeout_sec": 0, 00:16:32.907 "disable_auto_failback": false, 00:16:32.907 "generate_uuids": false, 00:16:32.907 "transport_tos": 0, 00:16:32.907 "nvme_error_stat": false, 00:16:32.907 "rdma_srq_size": 0, 00:16:32.907 "io_path_stat": false, 00:16:32.907 "allow_accel_sequence": false, 00:16:32.907 "rdma_max_cq_size": 0, 00:16:32.907 "rdma_cm_event_timeout_ms": 0, 00:16:32.907 "dhchap_digests": [ 00:16:32.907 "sha256", 00:16:32.907 "sha384", 00:16:32.907 "sha512" 00:16:32.907 ], 00:16:32.907 "dhchap_dhgroups": [ 00:16:32.907 "null", 00:16:32.907 "ffdhe2048", 00:16:32.908 "ffdhe3072", 00:16:32.908 "ffdhe4096", 00:16:32.908 "ffdhe6144", 00:16:32.908 "ffdhe8192" 00:16:32.908 ] 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "bdev_nvme_set_hotplug", 00:16:32.908 "params": { 00:16:32.908 "period_us": 100000, 00:16:32.908 "enable": false 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "bdev_malloc_create", 00:16:32.908 "params": { 00:16:32.908 "name": "malloc0", 00:16:32.908 "num_blocks": 8192, 00:16:32.908 "block_size": 4096, 00:16:32.908 
"physical_block_size": 4096, 00:16:32.908 "uuid": "4d61e42e-df5c-45f3-97f9-7da2871d797c", 00:16:32.908 "optimal_io_boundary": 0 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "bdev_wait_for_examine" 00:16:32.908 } 00:16:32.908 ] 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "subsystem": "nbd", 00:16:32.908 "config": [] 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "subsystem": "scheduler", 00:16:32.908 "config": [ 00:16:32.908 { 00:16:32.908 "method": "framework_set_scheduler", 00:16:32.908 "params": { 00:16:32.908 "name": "static" 00:16:32.908 } 00:16:32.908 } 00:16:32.908 ] 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "subsystem": "nvmf", 00:16:32.908 "config": [ 00:16:32.908 { 00:16:32.908 "method": "nvmf_set_config", 00:16:32.908 "params": { 00:16:32.908 "discovery_filter": "match_any", 00:16:32.908 "admin_cmd_passthru": { 00:16:32.908 "identify_ctrlr": false 00:16:32.908 } 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_set_max_subsystems", 00:16:32.908 "params": { 00:16:32.908 "max_subsystems": 1024 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_set_crdt", 00:16:32.908 "params": { 00:16:32.908 "crdt1": 0, 00:16:32.908 "crdt2": 0, 00:16:32.908 "crdt3": 0 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_create_transport", 00:16:32.908 "params": { 00:16:32.908 "trtype": "TCP", 00:16:32.908 "max_queue_depth": 128, 00:16:32.908 "max_io_qpairs_per_ctrlr": 127, 00:16:32.908 "in_capsule_data_size": 4096, 00:16:32.908 "max_io_size": 131072, 00:16:32.908 "io_unit_size": 131072, 00:16:32.908 "max_aq_depth": 128, 00:16:32.908 "num_shared_buffers": 511, 00:16:32.908 "buf_cache_size": 4294967295, 00:16:32.908 "dif_insert_or_strip": false, 00:16:32.908 "zcopy": false, 00:16:32.908 "c2h_success": false, 00:16:32.908 "sock_priority": 0, 00:16:32.908 "abort_timeout_sec": 1, 00:16:32.908 "ack_timeout": 0, 00:16:32.908 "data_wr_pool_size": 0 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_create_subsystem", 00:16:32.908 "params": { 00:16:32.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.908 "allow_any_host": false, 00:16:32.908 "serial_number": "SPDK00000000000001", 00:16:32.908 "model_number": "SPDK bdev Controller", 00:16:32.908 "max_namespaces": 10, 00:16:32.908 "min_cntlid": 1, 00:16:32.908 "max_cntlid": 65519, 00:16:32.908 "ana_reporting": false 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_subsystem_add_host", 00:16:32.908 "params": { 00:16:32.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.908 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.908 "psk": "/tmp/tmp.zs7qpKdYN5" 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_subsystem_add_ns", 00:16:32.908 "params": { 00:16:32.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.908 "namespace": { 00:16:32.908 "nsid": 1, 00:16:32.908 "bdev_name": "malloc0", 00:16:32.908 "nguid": "4D61E42EDF5C45F397F97DA2871D797C", 00:16:32.908 "uuid": "4d61e42e-df5c-45f3-97f9-7da2871d797c", 00:16:32.908 "no_auto_visible": false 00:16:32.908 } 00:16:32.908 } 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "method": "nvmf_subsystem_add_listener", 00:16:32.908 "params": { 00:16:32.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.908 "listen_address": { 00:16:32.908 "trtype": "TCP", 00:16:32.908 "adrfam": "IPv4", 00:16:32.908 "traddr": "10.0.0.2", 00:16:32.908 "trsvcid": "4420" 00:16:32.908 }, 00:16:32.908 "secure_channel": true 00:16:32.908 } 00:16:32.908 } 00:16:32.908 ] 00:16:32.908 } 
00:16:32.908 ] 00:16:32.908 }' 00:16:32.908 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:33.165 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:33.165 "subsystems": [ 00:16:33.165 { 00:16:33.166 "subsystem": "keyring", 00:16:33.166 "config": [] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "iobuf", 00:16:33.166 "config": [ 00:16:33.166 { 00:16:33.166 "method": "iobuf_set_options", 00:16:33.166 "params": { 00:16:33.166 "small_pool_count": 8192, 00:16:33.166 "large_pool_count": 1024, 00:16:33.166 "small_bufsize": 8192, 00:16:33.166 "large_bufsize": 135168 00:16:33.166 } 00:16:33.166 } 00:16:33.166 ] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "sock", 00:16:33.166 "config": [ 00:16:33.166 { 00:16:33.166 "method": "sock_impl_set_options", 00:16:33.166 "params": { 00:16:33.166 "impl_name": "posix", 00:16:33.166 "recv_buf_size": 2097152, 00:16:33.166 "send_buf_size": 2097152, 00:16:33.166 "enable_recv_pipe": true, 00:16:33.166 "enable_quickack": false, 00:16:33.166 "enable_placement_id": 0, 00:16:33.166 "enable_zerocopy_send_server": true, 00:16:33.166 "enable_zerocopy_send_client": false, 00:16:33.166 "zerocopy_threshold": 0, 00:16:33.166 "tls_version": 0, 00:16:33.166 "enable_ktls": false 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "sock_impl_set_options", 00:16:33.166 "params": { 00:16:33.166 "impl_name": "ssl", 00:16:33.166 "recv_buf_size": 4096, 00:16:33.166 "send_buf_size": 4096, 00:16:33.166 "enable_recv_pipe": true, 00:16:33.166 "enable_quickack": false, 00:16:33.166 "enable_placement_id": 0, 00:16:33.166 "enable_zerocopy_send_server": true, 00:16:33.166 "enable_zerocopy_send_client": false, 00:16:33.166 "zerocopy_threshold": 0, 00:16:33.166 "tls_version": 0, 00:16:33.166 "enable_ktls": false 00:16:33.166 } 00:16:33.166 } 00:16:33.166 ] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "vmd", 00:16:33.166 "config": [] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "accel", 00:16:33.166 "config": [ 00:16:33.166 { 00:16:33.166 "method": "accel_set_options", 00:16:33.166 "params": { 00:16:33.166 "small_cache_size": 128, 00:16:33.166 "large_cache_size": 16, 00:16:33.166 "task_count": 2048, 00:16:33.166 "sequence_count": 2048, 00:16:33.166 "buf_count": 2048 00:16:33.166 } 00:16:33.166 } 00:16:33.166 ] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "bdev", 00:16:33.166 "config": [ 00:16:33.166 { 00:16:33.166 "method": "bdev_set_options", 00:16:33.166 "params": { 00:16:33.166 "bdev_io_pool_size": 65535, 00:16:33.166 "bdev_io_cache_size": 256, 00:16:33.166 "bdev_auto_examine": true, 00:16:33.166 "iobuf_small_cache_size": 128, 00:16:33.166 "iobuf_large_cache_size": 16 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_raid_set_options", 00:16:33.166 "params": { 00:16:33.166 "process_window_size_kb": 1024 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_iscsi_set_options", 00:16:33.166 "params": { 00:16:33.166 "timeout_sec": 30 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_nvme_set_options", 00:16:33.166 "params": { 00:16:33.166 "action_on_timeout": "none", 00:16:33.166 "timeout_us": 0, 00:16:33.166 "timeout_admin_us": 0, 00:16:33.166 "keep_alive_timeout_ms": 10000, 00:16:33.166 "arbitration_burst": 0, 00:16:33.166 "low_priority_weight": 0, 00:16:33.166 "medium_priority_weight": 0, 00:16:33.166 
"high_priority_weight": 0, 00:16:33.166 "nvme_adminq_poll_period_us": 10000, 00:16:33.166 "nvme_ioq_poll_period_us": 0, 00:16:33.166 "io_queue_requests": 512, 00:16:33.166 "delay_cmd_submit": true, 00:16:33.166 "transport_retry_count": 4, 00:16:33.166 "bdev_retry_count": 3, 00:16:33.166 "transport_ack_timeout": 0, 00:16:33.166 "ctrlr_loss_timeout_sec": 0, 00:16:33.166 "reconnect_delay_sec": 0, 00:16:33.166 "fast_io_fail_timeout_sec": 0, 00:16:33.166 "disable_auto_failback": false, 00:16:33.166 "generate_uuids": false, 00:16:33.166 "transport_tos": 0, 00:16:33.166 "nvme_error_stat": false, 00:16:33.166 "rdma_srq_size": 0, 00:16:33.166 "io_path_stat": false, 00:16:33.166 "allow_accel_sequence": false, 00:16:33.166 "rdma_max_cq_size": 0, 00:16:33.166 "rdma_cm_event_timeout_ms": 0, 00:16:33.166 "dhchap_digests": [ 00:16:33.166 "sha256", 00:16:33.166 "sha384", 00:16:33.166 "sha512" 00:16:33.166 ], 00:16:33.166 "dhchap_dhgroups": [ 00:16:33.166 "null", 00:16:33.166 "ffdhe2048", 00:16:33.166 "ffdhe3072", 00:16:33.166 "ffdhe4096", 00:16:33.166 "ffdhe6144", 00:16:33.166 "ffdhe8192" 00:16:33.166 ] 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_nvme_attach_controller", 00:16:33.166 "params": { 00:16:33.166 "name": "TLSTEST", 00:16:33.166 "trtype": "TCP", 00:16:33.166 "adrfam": "IPv4", 00:16:33.166 "traddr": "10.0.0.2", 00:16:33.166 "trsvcid": "4420", 00:16:33.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.166 "prchk_reftag": false, 00:16:33.166 "prchk_guard": false, 00:16:33.166 "ctrlr_loss_timeout_sec": 0, 00:16:33.166 "reconnect_delay_sec": 0, 00:16:33.166 "fast_io_fail_timeout_sec": 0, 00:16:33.166 "psk": "/tmp/tmp.zs7qpKdYN5", 00:16:33.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.166 "hdgst": false, 00:16:33.166 "ddgst": false 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_nvme_set_hotplug", 00:16:33.166 "params": { 00:16:33.166 "period_us": 100000, 00:16:33.166 "enable": false 00:16:33.166 } 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "method": "bdev_wait_for_examine" 00:16:33.166 } 00:16:33.166 ] 00:16:33.166 }, 00:16:33.166 { 00:16:33.166 "subsystem": "nbd", 00:16:33.166 "config": [] 00:16:33.166 } 00:16:33.166 ] 00:16:33.166 }' 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1046999 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1046999 ']' 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1046999 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:33.166 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1046999 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1046999' 00:16:33.423 killing process with pid 1046999 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1046999 00:16:33.423 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.423 00:16:33.423 Latency(us) 00:16:33.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.423 
=================================================================================================================== 00:16:33.423 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.423 [2024-05-15 03:11:04.341690] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1046999 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1046741 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1046741 ']' 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1046741 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:33.423 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1046741 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1046741' 00:16:33.682 killing process with pid 1046741 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1046741 00:16:33.682 [2024-05-15 03:11:04.593853] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:33.682 [2024-05-15 03:11:04.593891] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1046741 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.682 03:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:33.682 "subsystems": [ 00:16:33.682 { 00:16:33.682 "subsystem": "keyring", 00:16:33.682 "config": [] 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "subsystem": "iobuf", 00:16:33.682 "config": [ 00:16:33.682 { 00:16:33.682 "method": "iobuf_set_options", 00:16:33.682 "params": { 00:16:33.682 "small_pool_count": 8192, 00:16:33.682 "large_pool_count": 1024, 00:16:33.682 "small_bufsize": 8192, 00:16:33.682 "large_bufsize": 135168 00:16:33.682 } 00:16:33.682 } 00:16:33.682 ] 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "subsystem": "sock", 00:16:33.682 "config": [ 00:16:33.682 { 00:16:33.682 "method": "sock_impl_set_options", 00:16:33.682 "params": { 00:16:33.682 "impl_name": "posix", 00:16:33.682 "recv_buf_size": 2097152, 00:16:33.682 "send_buf_size": 2097152, 00:16:33.682 "enable_recv_pipe": true, 00:16:33.682 "enable_quickack": false, 00:16:33.682 "enable_placement_id": 0, 00:16:33.682 "enable_zerocopy_send_server": true, 00:16:33.682 "enable_zerocopy_send_client": false, 00:16:33.682 "zerocopy_threshold": 0, 00:16:33.682 "tls_version": 0, 00:16:33.682 "enable_ktls": false 00:16:33.682 } 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "method": "sock_impl_set_options", 00:16:33.682 
"params": { 00:16:33.682 "impl_name": "ssl", 00:16:33.682 "recv_buf_size": 4096, 00:16:33.682 "send_buf_size": 4096, 00:16:33.682 "enable_recv_pipe": true, 00:16:33.682 "enable_quickack": false, 00:16:33.682 "enable_placement_id": 0, 00:16:33.682 "enable_zerocopy_send_server": true, 00:16:33.682 "enable_zerocopy_send_client": false, 00:16:33.682 "zerocopy_threshold": 0, 00:16:33.682 "tls_version": 0, 00:16:33.682 "enable_ktls": false 00:16:33.682 } 00:16:33.682 } 00:16:33.682 ] 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "subsystem": "vmd", 00:16:33.682 "config": [] 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "subsystem": "accel", 00:16:33.682 "config": [ 00:16:33.682 { 00:16:33.682 "method": "accel_set_options", 00:16:33.682 "params": { 00:16:33.682 "small_cache_size": 128, 00:16:33.682 "large_cache_size": 16, 00:16:33.682 "task_count": 2048, 00:16:33.682 "sequence_count": 2048, 00:16:33.682 "buf_count": 2048 00:16:33.682 } 00:16:33.682 } 00:16:33.682 ] 00:16:33.682 }, 00:16:33.682 { 00:16:33.682 "subsystem": "bdev", 00:16:33.682 "config": [ 00:16:33.682 { 00:16:33.682 "method": "bdev_set_options", 00:16:33.682 "params": { 00:16:33.682 "bdev_io_pool_size": 65535, 00:16:33.682 "bdev_io_cache_size": 256, 00:16:33.683 "bdev_auto_examine": true, 00:16:33.683 "iobuf_small_cache_size": 128, 00:16:33.683 "iobuf_large_cache_size": 16 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_raid_set_options", 00:16:33.683 "params": { 00:16:33.683 "process_window_size_kb": 1024 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_iscsi_set_options", 00:16:33.683 "params": { 00:16:33.683 "timeout_sec": 30 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_nvme_set_options", 00:16:33.683 "params": { 00:16:33.683 "action_on_timeout": "none", 00:16:33.683 "timeout_us": 0, 00:16:33.683 "timeout_admin_us": 0, 00:16:33.683 "keep_alive_timeout_ms": 10000, 00:16:33.683 "arbitration_burst": 0, 00:16:33.683 "low_priority_weight": 0, 00:16:33.683 "medium_priority_weight": 0, 00:16:33.683 "high_priority_weight": 0, 00:16:33.683 "nvme_adminq_poll_period_us": 10000, 00:16:33.683 "nvme_ioq_poll_period_us": 0, 00:16:33.683 "io_queue_requests": 0, 00:16:33.683 "delay_cmd_submit": true, 00:16:33.683 "transport_retry_count": 4, 00:16:33.683 "bdev_retry_count": 3, 00:16:33.683 "transport_ack_timeout": 0, 00:16:33.683 "ctrlr_loss_timeout_sec": 0, 00:16:33.683 "reconnect_delay_sec": 0, 00:16:33.683 "fast_io_fail_timeout_sec": 0, 00:16:33.683 "disable_auto_failback": false, 00:16:33.683 "generate_uuids": false, 00:16:33.683 "transport_tos": 0, 00:16:33.683 "nvme_error_stat": false, 00:16:33.683 "rdma_srq_size": 0, 00:16:33.683 "io_path_stat": false, 00:16:33.683 "allow_accel_sequence": false, 00:16:33.683 "rdma_max_cq_size": 0, 00:16:33.683 "rdma_cm_event_timeout_ms": 0, 00:16:33.683 "dhchap_digests": [ 00:16:33.683 "sha256", 00:16:33.683 "sha384", 00:16:33.683 "sha512" 00:16:33.683 ], 00:16:33.683 "dhchap_dhgroups": [ 00:16:33.683 "null", 00:16:33.683 "ffdhe2048", 00:16:33.683 "ffdhe3072", 00:16:33.683 "ffdhe4096", 00:16:33.683 "ffdhe6144", 00:16:33.683 "ffdhe8192" 00:16:33.683 ] 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_nvme_set_hotplug", 00:16:33.683 "params": { 00:16:33.683 "period_us": 100000, 00:16:33.683 "enable": false 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_malloc_create", 00:16:33.683 "params": { 00:16:33.683 "name": "malloc0", 00:16:33.683 "num_blocks": 8192, 00:16:33.683 
"block_size": 4096, 00:16:33.683 "physical_block_size": 4096, 00:16:33.683 "uuid": "4d61e42e-df5c-45f3-97f9-7da2871d797c", 00:16:33.683 "optimal_io_boundary": 0 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "bdev_wait_for_examine" 00:16:33.683 } 00:16:33.683 ] 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "subsystem": "nbd", 00:16:33.683 "config": [] 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "subsystem": "scheduler", 00:16:33.683 "config": [ 00:16:33.683 { 00:16:33.683 "method": "framework_set_scheduler", 00:16:33.683 "params": { 00:16:33.683 "name": "static" 00:16:33.683 } 00:16:33.683 } 00:16:33.683 ] 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "subsystem": "nvmf", 00:16:33.683 "config": [ 00:16:33.683 { 00:16:33.683 "method": "nvmf_set_config", 00:16:33.683 "params": { 00:16:33.683 "discovery_filter": "match_any", 00:16:33.683 "admin_cmd_passthru": { 00:16:33.683 "identify_ctrlr": false 00:16:33.683 } 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_set_max_subsystems", 00:16:33.683 "params": { 00:16:33.683 "max_subsystems": 1024 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_set_crdt", 00:16:33.683 "params": { 00:16:33.683 "crdt1": 0, 00:16:33.683 "crdt2": 0, 00:16:33.683 "crdt3": 0 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_create_transport", 00:16:33.683 "params": { 00:16:33.683 "trtype": "TCP", 00:16:33.683 "max_queue_depth": 128, 00:16:33.683 "max_io_qpairs_per_ctrlr": 127, 00:16:33.683 "in_capsule_data_size": 4096, 00:16:33.683 "max_io_size": 131072, 00:16:33.683 "io_unit_size": 131072, 00:16:33.683 "max_aq_depth": 128, 00:16:33.683 "num_shared_buffers": 511, 00:16:33.683 "buf_cache_size": 4294967295, 00:16:33.683 "dif_insert_or_strip": false, 00:16:33.683 "zcopy": false, 00:16:33.683 "c2h_success": false, 00:16:33.683 "sock_priority": 0, 00:16:33.683 "abort_timeout_sec": 1, 00:16:33.683 "ack_timeout": 0, 00:16:33.683 "data_wr_pool_size": 0 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_create_subsystem", 00:16:33.683 "params": { 00:16:33.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.683 "allow_any_host": false, 00:16:33.683 "serial_number": "SPDK00000000000001", 00:16:33.683 "model_number": "SPDK bdev Controller", 00:16:33.683 "max_namespaces": 10, 00:16:33.683 "min_cntlid": 1, 00:16:33.683 "max_cntlid": 65519, 00:16:33.683 "ana_reporting": false 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_subsystem_add_host", 00:16:33.683 "params": { 00:16:33.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.683 "host": "nqn.2016-06.io.spdk:host1", 00:16:33.683 "psk": "/tmp/tmp.zs7qpKdYN5" 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_subsystem_add_ns", 00:16:33.683 "params": { 00:16:33.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.683 "namespace": { 00:16:33.683 "nsid": 1, 00:16:33.683 "bdev_name": "malloc0", 00:16:33.683 "nguid": "4D61E42EDF5C45F397F97DA2871D797C", 00:16:33.683 "uuid": "4d61e42e-df5c-45f3-97f9-7da2871d797c", 00:16:33.683 "no_auto_visible": false 00:16:33.683 } 00:16:33.683 } 00:16:33.683 }, 00:16:33.683 { 00:16:33.683 "method": "nvmf_subsystem_add_listener", 00:16:33.683 "params": { 00:16:33.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.683 "listen_address": { 00:16:33.683 "trtype": "TCP", 00:16:33.683 "adrfam": "IPv4", 00:16:33.683 "traddr": "10.0.0.2", 00:16:33.683 "trsvcid": "4420" 00:16:33.683 }, 00:16:33.683 "secure_channel": true 00:16:33.683 } 00:16:33.683 } 
00:16:33.683 ] 00:16:33.683 } 00:16:33.683 ] 00:16:33.683 }' 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1047425 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1047425 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1047425 ']' 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.683 03:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.942 [2024-05-15 03:11:04.863837] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:33.942 [2024-05-15 03:11:04.863885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.942 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.942 [2024-05-15 03:11:04.921045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.942 [2024-05-15 03:11:04.992833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.942 [2024-05-15 03:11:04.992868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.942 [2024-05-15 03:11:04.992875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.942 [2024-05-15 03:11:04.992881] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.942 [2024-05-15 03:11:04.992886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
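[Annotation, not part of the captured output.] target/tls.sh@203 above boots a brand-new target (pid 1047425) from the JSON that `save_config` produced earlier, delivered over /dev/fd/62, which is how a bash process substitution surfaces as a file. A condensed sketch of that round trip, assuming the same binaries and sockets as the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    tgtconf=$($RPC save_config)                  # serialize the live target to JSON
    $NVMF_TGT -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &    # replay it at startup

The transport, listener, namespace, and PSK-guarded host entry reappear in the restarted target with no further RPC calls, which is what the NOTICE and deprecation lines that follow confirm.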
00:16:33.942 [2024-05-15 03:11:04.992935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.199 [2024-05-15 03:11:05.187987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.199 [2024-05-15 03:11:05.203960] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:34.199 [2024-05-15 03:11:05.219997] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:34.199 [2024-05-15 03:11:05.220046] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.199 [2024-05-15 03:11:05.230640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1047501 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1047501 /var/tmp/bdevperf.sock 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1047501 ']' 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.764 03:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:34.765 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:34.765 03:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:34.765 "subsystems": [ 00:16:34.765 { 00:16:34.765 "subsystem": "keyring", 00:16:34.765 "config": [] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "iobuf", 00:16:34.765 "config": [ 00:16:34.765 { 00:16:34.765 "method": "iobuf_set_options", 00:16:34.765 "params": { 00:16:34.765 "small_pool_count": 8192, 00:16:34.765 "large_pool_count": 1024, 00:16:34.765 "small_bufsize": 8192, 00:16:34.765 "large_bufsize": 135168 00:16:34.765 } 00:16:34.765 } 00:16:34.765 ] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "sock", 00:16:34.765 "config": [ 00:16:34.765 { 00:16:34.765 "method": "sock_impl_set_options", 00:16:34.765 "params": { 00:16:34.765 "impl_name": "posix", 00:16:34.765 "recv_buf_size": 2097152, 00:16:34.765 "send_buf_size": 2097152, 00:16:34.765 "enable_recv_pipe": true, 00:16:34.765 "enable_quickack": false, 00:16:34.765 "enable_placement_id": 0, 00:16:34.765 "enable_zerocopy_send_server": true, 00:16:34.765 "enable_zerocopy_send_client": false, 00:16:34.765 "zerocopy_threshold": 0, 00:16:34.765 "tls_version": 0, 00:16:34.765 "enable_ktls": false 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "sock_impl_set_options", 00:16:34.765 "params": { 00:16:34.765 "impl_name": "ssl", 00:16:34.765 "recv_buf_size": 4096, 00:16:34.765 
"send_buf_size": 4096, 00:16:34.765 "enable_recv_pipe": true, 00:16:34.765 "enable_quickack": false, 00:16:34.765 "enable_placement_id": 0, 00:16:34.765 "enable_zerocopy_send_server": true, 00:16:34.765 "enable_zerocopy_send_client": false, 00:16:34.765 "zerocopy_threshold": 0, 00:16:34.765 "tls_version": 0, 00:16:34.765 "enable_ktls": false 00:16:34.765 } 00:16:34.765 } 00:16:34.765 ] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "vmd", 00:16:34.765 "config": [] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "accel", 00:16:34.765 "config": [ 00:16:34.765 { 00:16:34.765 "method": "accel_set_options", 00:16:34.765 "params": { 00:16:34.765 "small_cache_size": 128, 00:16:34.765 "large_cache_size": 16, 00:16:34.765 "task_count": 2048, 00:16:34.765 "sequence_count": 2048, 00:16:34.765 "buf_count": 2048 00:16:34.765 } 00:16:34.765 } 00:16:34.765 ] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "bdev", 00:16:34.765 "config": [ 00:16:34.765 { 00:16:34.765 "method": "bdev_set_options", 00:16:34.765 "params": { 00:16:34.765 "bdev_io_pool_size": 65535, 00:16:34.765 "bdev_io_cache_size": 256, 00:16:34.765 "bdev_auto_examine": true, 00:16:34.765 "iobuf_small_cache_size": 128, 00:16:34.765 "iobuf_large_cache_size": 16 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_raid_set_options", 00:16:34.765 "params": { 00:16:34.765 "process_window_size_kb": 1024 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_iscsi_set_options", 00:16:34.765 "params": { 00:16:34.765 "timeout_sec": 30 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_nvme_set_options", 00:16:34.765 "params": { 00:16:34.765 "action_on_timeout": "none", 00:16:34.765 "timeout_us": 0, 00:16:34.765 "timeout_admin_us": 0, 00:16:34.765 "keep_alive_timeout_ms": 10000, 00:16:34.765 "arbitration_burst": 0, 00:16:34.765 "low_priority_weight": 0, 00:16:34.765 "medium_priority_weight": 0, 00:16:34.765 "high_priority_weight": 0, 00:16:34.765 "nvme_adminq_poll_period_us": 10000, 00:16:34.765 "nvme_ioq_poll_period_us": 0, 00:16:34.765 "io_queue_requests": 512, 00:16:34.765 "delay_cmd_submit": true, 00:16:34.765 "transport_retry_count": 4, 00:16:34.765 "bdev_retry_count": 3, 00:16:34.765 "transport_ack_timeout": 0, 00:16:34.765 "ctrlr_loss_timeout_sec": 0, 00:16:34.765 "reconnect_delay_sec": 0, 00:16:34.765 "fast_io_fail_timeout_sec": 0, 00:16:34.765 "disable_auto_failback": false, 00:16:34.765 "generate_uuids": false, 00:16:34.765 "transport_tos": 0, 00:16:34.765 "nvme_error_stat": false, 00:16:34.765 "rdma_srq_size": 0, 00:16:34.765 "io_path_stat": false, 00:16:34.765 "allow_accel_sequence": false, 00:16:34.765 "rdma_max_cq_size": 0, 00:16:34.765 "rdma_cm_event_timeout_ms": 0, 00:16:34.765 "dhchap_digests": [ 00:16:34.765 "sha256", 00:16:34.765 "sha384", 00:16:34.765 "sha512" 00:16:34.765 ], 00:16:34.765 "dhchap_dhgroups": [ 00:16:34.765 "null", 00:16:34.765 "ffdhe2048", 00:16:34.765 "ffdhe3072", 00:16:34.765 "ffdhe4096", 00:16:34.765 "ffdhe6144", 00:16:34.765 "ffdhe8192" 00:16:34.765 ] 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_nvme_attach_controller", 00:16:34.765 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:34.765 "params": { 00:16:34.765 "name": "TLSTEST", 00:16:34.765 "trtype": "TCP", 00:16:34.765 "adrfam": "IPv4", 00:16:34.765 "traddr": "10.0.0.2", 00:16:34.765 "trsvcid": "4420", 00:16:34.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.765 "prchk_reftag": false, 00:16:34.765 "prchk_guard": false, 00:16:34.765 "ctrlr_loss_timeout_sec": 0, 00:16:34.765 "reconnect_delay_sec": 0, 00:16:34.765 "fast_io_fail_timeout_sec": 0, 00:16:34.765 "psk": "/tmp/tmp.zs7qpKdYN5", 00:16:34.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.765 "hdgst": false, 00:16:34.765 "ddgst": false 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_nvme_set_hotplug", 00:16:34.765 "params": { 00:16:34.765 "period_us": 100000, 00:16:34.765 "enable": false 00:16:34.765 } 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "method": "bdev_wait_for_examine" 00:16:34.765 } 00:16:34.765 ] 00:16:34.765 }, 00:16:34.765 { 00:16:34.765 "subsystem": "nbd", 00:16:34.765 "config": [] 00:16:34.765 } 00:16:34.765 ] 00:16:34.765 }' 00:16:34.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.765 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:34.765 03:11:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.765 [2024-05-15 03:11:05.737661] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:34.765 [2024-05-15 03:11:05.737709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047501 ] 00:16:34.765 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.765 [2024-05-15 03:11:05.787587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.765 [2024-05-15 03:11:05.860456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.024 [2024-05-15 03:11:05.994218] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.024 [2024-05-15 03:11:05.994298] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:35.589 03:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:35.589 03:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:35.589 03:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:35.589 Running I/O for 10 seconds... 
00:16:45.556
00:16:45.556 Latency(us)
00:16:45.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:45.556 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:45.556 Verification LBA range: start 0x0 length 0x2000
00:16:45.556 TLSTESTn1 : 10.03 4247.73 16.59 0.00 0.00 30075.28 7066.49 47641.82
00:16:45.556 ===================================================================================================================
00:16:45.556 Total : 4247.73 16.59 0.00 0.00 30075.28 7066.49 47641.82
00:16:45.556 0
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1047501
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1047501 ']'
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1047501
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:45.556 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1047501
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1047501'
00:16:45.813 killing process with pid 1047501
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1047501
00:16:45.813 Received shutdown signal, test time was about 10.000000 seconds
00:16:45.813
00:16:45.813 Latency(us)
00:16:45.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:45.813 ===================================================================================================================
00:16:45.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:45.813 [2024-05-15 03:11:16.724701] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1047501
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1047425
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1047425 ']'
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1047425
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1047425
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:16:45.813 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:16:46.071 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1047425'
00:16:46.071 killing process with pid 1047425
00:16:46.071 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1047425
00:16:46.071 [2024-05-15 03:11:16.976003] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:46.071 [2024-05-15 03:11:16.976044] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:46.071 03:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1047425 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1049359 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1049359 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1049359 ']' 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:46.071 03:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.329 [2024-05-15 03:11:17.247209] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:46.329 [2024-05-15 03:11:17.247257] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.329 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.329 [2024-05-15 03:11:17.303955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.329 [2024-05-15 03:11:17.375924] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.329 [2024-05-15 03:11:17.375964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.329 [2024-05-15 03:11:17.375972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.329 [2024-05-15 03:11:17.375978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.329 [2024-05-15 03:11:17.375983] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
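A quick note on reading the Latency(us) tables above: IOPS and MiB/s are per-job averages over the runtime, and with the 4096-byte I/O size used here MiB/s is just IOPS x 4096 / 2^20, so 4247.73 IOPS reproduces the reported 16.59 MiB/s. The all-zero Total table printed after "Received shutdown signal" appears to be bdevperf's final dump once I/O has already stopped, not a failed run. A one-line sanity check (illustrative, not part of the test):

    awk 'BEGIN { printf "%.2f MiB/s\n", 4247.73 * 4096 / 1048576 }'   # prints 16.59 MiB/s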
00:16:46.329 [2024-05-15 03:11:17.376017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.894 03:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:46.894 03:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:46.894 03:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.894 03:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.894 03:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.152 03:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.152 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.zs7qpKdYN5 00:16:47.152 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zs7qpKdYN5 00:16:47.152 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:47.152 [2024-05-15 03:11:18.235122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.152 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:47.410 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:47.410 [2024-05-15 03:11:18.571981] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:47.410 [2024-05-15 03:11:18.572036] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.410 [2024-05-15 03:11:18.572217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.668 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.668 malloc0 00:16:47.668 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:47.926 03:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zs7qpKdYN5 00:16:48.184 [2024-05-15 03:11:19.109751] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1049819 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1049819 /var/tmp/bdevperf.sock 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1049819 ']' 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:48.184 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.184 [2024-05-15 03:11:19.161237] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:48.184 [2024-05-15 03:11:19.161283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049819 ] 00:16:48.184 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.184 [2024-05-15 03:11:19.214460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.184 [2024-05-15 03:11:19.286652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.116 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.116 03:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:49.116 03:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zs7qpKdYN5 00:16:49.116 03:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:49.374 [2024-05-15 03:11:20.302380] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.374 nvme0n1 00:16:49.374 03:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:49.374 Running I/O for 1 seconds... 
00:16:50.365
00:16:50.365 Latency(us)
00:16:50.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:50.365 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:50.365 Verification LBA range: start 0x0 length 0x2000
00:16:50.365 nvme0n1 : 1.02 4812.81 18.80 0.00 0.00 26361.45 5955.23 31685.23
00:16:50.365 ===================================================================================================================
00:16:50.365 Total : 4812.81 18.80 0.00 0.00 26361.45 5955.23 31685.23
00:16:50.365 0
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1049819
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1049819 ']'
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1049819
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:50.365 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1049819
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1049819'
00:16:50.623 killing process with pid 1049819
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1049819
00:16:50.623 Received shutdown signal, test time was about 1.000000 seconds
00:16:50.623
00:16:50.623 Latency(us)
00:16:50.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:50.623 ===================================================================================================================
00:16:50.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1049819
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1049359
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1049359 ']'
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1049359
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:50.623 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1049359
00:16:50.881 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:50.881 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:50.881 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1049359'
00:16:50.881 killing process with pid 1049359
00:16:50.881 03:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1049359
00:16:50.881 [2024-05-15 03:11:21.801306] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:16:50.881 [2024-05-15 03:11:21.801344] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:16:50.881 03:11:21 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@970 -- # wait 1049359 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1050297 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1050297 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1050297 ']' 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:50.881 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.139 [2024-05-15 03:11:22.065927] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:51.139 [2024-05-15 03:11:22.065973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.139 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.139 [2024-05-15 03:11:22.120217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.139 [2024-05-15 03:11:22.197495] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.139 [2024-05-15 03:11:22.197535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.139 [2024-05-15 03:11:22.197542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.139 [2024-05-15 03:11:22.197547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.139 [2024-05-15 03:11:22.197552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
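Stripped of the xtrace noise, the bring-up that tests @219 through @232 just exercised reduces to the following RPC sequence (commands copied from the trace, rpc.py paths shortened):

    # Target side (setup_nvmf_tgt): PSK supplied as a file path, the deprecated form
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.zs7qpKdYN5

    # Initiator side (bdevperf): the PSK file is registered in the keyring, then referenced by name
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zs7qpKdYN5
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The mixed usage explains the warnings: the target side still takes the "PSK path" form scheduled for removal in v24.09, while the initiator already goes through the keyring.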
00:16:51.139 [2024-05-15 03:11:22.197570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 [2024-05-15 03:11:22.916716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.072 malloc0 00:16:52.072 [2024-05-15 03:11:22.944959] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:52.072 [2024-05-15 03:11:22.945014] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.072 [2024-05-15 03:11:22.945191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1050441 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1050441 /var/tmp/bdevperf.sock 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1050441 ']' 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.072 03:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 [2024-05-15 03:11:23.003955] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:16:52.072 [2024-05-15 03:11:23.003996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050441 ] 00:16:52.072 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.072 [2024-05-15 03:11:23.057475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.072 [2024-05-15 03:11:23.136223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.005 03:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.005 03:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:53.005 03:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zs7qpKdYN5 00:16:53.005 03:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:53.005 [2024-05-15 03:11:24.132192] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.262 nvme0n1 00:16:53.262 03:11:24 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.262 Running I/O for 1 seconds... 00:16:54.195 00:16:54.195 Latency(us) 00:16:54.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:54.195 Verification LBA range: start 0x0 length 0x2000 00:16:54.195 nvme0n1 : 1.01 5186.17 20.26 0.00 0.00 24495.17 6411.13 32369.09 00:16:54.195 =================================================================================================================== 00:16:54.195 Total : 5186.17 20.26 0.00 0.00 24495.17 6411.13 32369.09 00:16:54.195 0 00:16:54.195 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:54.195 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.195 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.453 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.453 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:54.453 "subsystems": [ 00:16:54.453 { 00:16:54.453 "subsystem": "keyring", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "keyring_file_add_key", 00:16:54.453 "params": { 00:16:54.453 "name": "key0", 00:16:54.453 "path": "/tmp/tmp.zs7qpKdYN5" 00:16:54.453 } 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "iobuf", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "iobuf_set_options", 00:16:54.453 "params": { 00:16:54.453 "small_pool_count": 8192, 00:16:54.453 "large_pool_count": 1024, 00:16:54.453 "small_bufsize": 8192, 00:16:54.453 "large_bufsize": 135168 00:16:54.453 } 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "sock", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "sock_impl_set_options", 00:16:54.453 "params": { 00:16:54.453 "impl_name": "posix", 00:16:54.453 "recv_buf_size": 2097152, 
00:16:54.453 "send_buf_size": 2097152, 00:16:54.453 "enable_recv_pipe": true, 00:16:54.453 "enable_quickack": false, 00:16:54.453 "enable_placement_id": 0, 00:16:54.453 "enable_zerocopy_send_server": true, 00:16:54.453 "enable_zerocopy_send_client": false, 00:16:54.453 "zerocopy_threshold": 0, 00:16:54.453 "tls_version": 0, 00:16:54.453 "enable_ktls": false 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "sock_impl_set_options", 00:16:54.453 "params": { 00:16:54.453 "impl_name": "ssl", 00:16:54.453 "recv_buf_size": 4096, 00:16:54.453 "send_buf_size": 4096, 00:16:54.453 "enable_recv_pipe": true, 00:16:54.453 "enable_quickack": false, 00:16:54.453 "enable_placement_id": 0, 00:16:54.453 "enable_zerocopy_send_server": true, 00:16:54.453 "enable_zerocopy_send_client": false, 00:16:54.453 "zerocopy_threshold": 0, 00:16:54.453 "tls_version": 0, 00:16:54.453 "enable_ktls": false 00:16:54.453 } 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "vmd", 00:16:54.453 "config": [] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "accel", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "accel_set_options", 00:16:54.453 "params": { 00:16:54.453 "small_cache_size": 128, 00:16:54.453 "large_cache_size": 16, 00:16:54.453 "task_count": 2048, 00:16:54.453 "sequence_count": 2048, 00:16:54.453 "buf_count": 2048 00:16:54.453 } 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "bdev", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "bdev_set_options", 00:16:54.453 "params": { 00:16:54.453 "bdev_io_pool_size": 65535, 00:16:54.453 "bdev_io_cache_size": 256, 00:16:54.453 "bdev_auto_examine": true, 00:16:54.453 "iobuf_small_cache_size": 128, 00:16:54.453 "iobuf_large_cache_size": 16 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_raid_set_options", 00:16:54.453 "params": { 00:16:54.453 "process_window_size_kb": 1024 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_iscsi_set_options", 00:16:54.453 "params": { 00:16:54.453 "timeout_sec": 30 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_nvme_set_options", 00:16:54.453 "params": { 00:16:54.453 "action_on_timeout": "none", 00:16:54.453 "timeout_us": 0, 00:16:54.453 "timeout_admin_us": 0, 00:16:54.453 "keep_alive_timeout_ms": 10000, 00:16:54.453 "arbitration_burst": 0, 00:16:54.453 "low_priority_weight": 0, 00:16:54.453 "medium_priority_weight": 0, 00:16:54.453 "high_priority_weight": 0, 00:16:54.453 "nvme_adminq_poll_period_us": 10000, 00:16:54.453 "nvme_ioq_poll_period_us": 0, 00:16:54.453 "io_queue_requests": 0, 00:16:54.453 "delay_cmd_submit": true, 00:16:54.453 "transport_retry_count": 4, 00:16:54.453 "bdev_retry_count": 3, 00:16:54.453 "transport_ack_timeout": 0, 00:16:54.453 "ctrlr_loss_timeout_sec": 0, 00:16:54.453 "reconnect_delay_sec": 0, 00:16:54.453 "fast_io_fail_timeout_sec": 0, 00:16:54.453 "disable_auto_failback": false, 00:16:54.453 "generate_uuids": false, 00:16:54.453 "transport_tos": 0, 00:16:54.453 "nvme_error_stat": false, 00:16:54.453 "rdma_srq_size": 0, 00:16:54.453 "io_path_stat": false, 00:16:54.453 "allow_accel_sequence": false, 00:16:54.453 "rdma_max_cq_size": 0, 00:16:54.453 "rdma_cm_event_timeout_ms": 0, 00:16:54.453 "dhchap_digests": [ 00:16:54.453 "sha256", 00:16:54.453 "sha384", 00:16:54.453 "sha512" 00:16:54.453 ], 00:16:54.453 "dhchap_dhgroups": [ 00:16:54.453 "null", 00:16:54.453 "ffdhe2048", 00:16:54.453 "ffdhe3072", 
00:16:54.453 "ffdhe4096", 00:16:54.453 "ffdhe6144", 00:16:54.453 "ffdhe8192" 00:16:54.453 ] 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_nvme_set_hotplug", 00:16:54.453 "params": { 00:16:54.453 "period_us": 100000, 00:16:54.453 "enable": false 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_malloc_create", 00:16:54.453 "params": { 00:16:54.453 "name": "malloc0", 00:16:54.453 "num_blocks": 8192, 00:16:54.453 "block_size": 4096, 00:16:54.453 "physical_block_size": 4096, 00:16:54.453 "uuid": "8b15749a-07bd-4cad-97e5-5e2c299653fc", 00:16:54.453 "optimal_io_boundary": 0 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "bdev_wait_for_examine" 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "nbd", 00:16:54.453 "config": [] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "scheduler", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "framework_set_scheduler", 00:16:54.453 "params": { 00:16:54.453 "name": "static" 00:16:54.453 } 00:16:54.453 } 00:16:54.453 ] 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "subsystem": "nvmf", 00:16:54.453 "config": [ 00:16:54.453 { 00:16:54.453 "method": "nvmf_set_config", 00:16:54.453 "params": { 00:16:54.453 "discovery_filter": "match_any", 00:16:54.453 "admin_cmd_passthru": { 00:16:54.453 "identify_ctrlr": false 00:16:54.453 } 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "nvmf_set_max_subsystems", 00:16:54.453 "params": { 00:16:54.453 "max_subsystems": 1024 00:16:54.453 } 00:16:54.453 }, 00:16:54.453 { 00:16:54.453 "method": "nvmf_set_crdt", 00:16:54.453 "params": { 00:16:54.453 "crdt1": 0, 00:16:54.454 "crdt2": 0, 00:16:54.454 "crdt3": 0 00:16:54.454 } 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "method": "nvmf_create_transport", 00:16:54.454 "params": { 00:16:54.454 "trtype": "TCP", 00:16:54.454 "max_queue_depth": 128, 00:16:54.454 "max_io_qpairs_per_ctrlr": 127, 00:16:54.454 "in_capsule_data_size": 4096, 00:16:54.454 "max_io_size": 131072, 00:16:54.454 "io_unit_size": 131072, 00:16:54.454 "max_aq_depth": 128, 00:16:54.454 "num_shared_buffers": 511, 00:16:54.454 "buf_cache_size": 4294967295, 00:16:54.454 "dif_insert_or_strip": false, 00:16:54.454 "zcopy": false, 00:16:54.454 "c2h_success": false, 00:16:54.454 "sock_priority": 0, 00:16:54.454 "abort_timeout_sec": 1, 00:16:54.454 "ack_timeout": 0, 00:16:54.454 "data_wr_pool_size": 0 00:16:54.454 } 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "method": "nvmf_create_subsystem", 00:16:54.454 "params": { 00:16:54.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.454 "allow_any_host": false, 00:16:54.454 "serial_number": "00000000000000000000", 00:16:54.454 "model_number": "SPDK bdev Controller", 00:16:54.454 "max_namespaces": 32, 00:16:54.454 "min_cntlid": 1, 00:16:54.454 "max_cntlid": 65519, 00:16:54.454 "ana_reporting": false 00:16:54.454 } 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "method": "nvmf_subsystem_add_host", 00:16:54.454 "params": { 00:16:54.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.454 "host": "nqn.2016-06.io.spdk:host1", 00:16:54.454 "psk": "key0" 00:16:54.454 } 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "method": "nvmf_subsystem_add_ns", 00:16:54.454 "params": { 00:16:54.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.454 "namespace": { 00:16:54.454 "nsid": 1, 00:16:54.454 "bdev_name": "malloc0", 00:16:54.454 "nguid": "8B15749A07BD4CAD97E55E2C299653FC", 00:16:54.454 "uuid": "8b15749a-07bd-4cad-97e5-5e2c299653fc", 00:16:54.454 
"no_auto_visible": false 00:16:54.454 } 00:16:54.454 } 00:16:54.454 }, 00:16:54.454 { 00:16:54.454 "method": "nvmf_subsystem_add_listener", 00:16:54.454 "params": { 00:16:54.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.454 "listen_address": { 00:16:54.454 "trtype": "TCP", 00:16:54.454 "adrfam": "IPv4", 00:16:54.454 "traddr": "10.0.0.2", 00:16:54.454 "trsvcid": "4420" 00:16:54.454 }, 00:16:54.454 "secure_channel": true 00:16:54.454 } 00:16:54.454 } 00:16:54.454 ] 00:16:54.454 } 00:16:54.454 ] 00:16:54.454 }' 00:16:54.454 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:54.712 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:54.712 "subsystems": [ 00:16:54.712 { 00:16:54.712 "subsystem": "keyring", 00:16:54.712 "config": [ 00:16:54.712 { 00:16:54.712 "method": "keyring_file_add_key", 00:16:54.712 "params": { 00:16:54.712 "name": "key0", 00:16:54.712 "path": "/tmp/tmp.zs7qpKdYN5" 00:16:54.712 } 00:16:54.712 } 00:16:54.712 ] 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "subsystem": "iobuf", 00:16:54.712 "config": [ 00:16:54.712 { 00:16:54.712 "method": "iobuf_set_options", 00:16:54.712 "params": { 00:16:54.712 "small_pool_count": 8192, 00:16:54.712 "large_pool_count": 1024, 00:16:54.712 "small_bufsize": 8192, 00:16:54.712 "large_bufsize": 135168 00:16:54.712 } 00:16:54.712 } 00:16:54.712 ] 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "subsystem": "sock", 00:16:54.712 "config": [ 00:16:54.712 { 00:16:54.712 "method": "sock_impl_set_options", 00:16:54.712 "params": { 00:16:54.712 "impl_name": "posix", 00:16:54.712 "recv_buf_size": 2097152, 00:16:54.712 "send_buf_size": 2097152, 00:16:54.712 "enable_recv_pipe": true, 00:16:54.712 "enable_quickack": false, 00:16:54.712 "enable_placement_id": 0, 00:16:54.712 "enable_zerocopy_send_server": true, 00:16:54.712 "enable_zerocopy_send_client": false, 00:16:54.712 "zerocopy_threshold": 0, 00:16:54.712 "tls_version": 0, 00:16:54.712 "enable_ktls": false 00:16:54.712 } 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "method": "sock_impl_set_options", 00:16:54.712 "params": { 00:16:54.712 "impl_name": "ssl", 00:16:54.712 "recv_buf_size": 4096, 00:16:54.712 "send_buf_size": 4096, 00:16:54.712 "enable_recv_pipe": true, 00:16:54.712 "enable_quickack": false, 00:16:54.712 "enable_placement_id": 0, 00:16:54.712 "enable_zerocopy_send_server": true, 00:16:54.712 "enable_zerocopy_send_client": false, 00:16:54.712 "zerocopy_threshold": 0, 00:16:54.712 "tls_version": 0, 00:16:54.712 "enable_ktls": false 00:16:54.712 } 00:16:54.712 } 00:16:54.712 ] 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "subsystem": "vmd", 00:16:54.712 "config": [] 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "subsystem": "accel", 00:16:54.712 "config": [ 00:16:54.712 { 00:16:54.712 "method": "accel_set_options", 00:16:54.712 "params": { 00:16:54.712 "small_cache_size": 128, 00:16:54.712 "large_cache_size": 16, 00:16:54.712 "task_count": 2048, 00:16:54.712 "sequence_count": 2048, 00:16:54.712 "buf_count": 2048 00:16:54.712 } 00:16:54.712 } 00:16:54.712 ] 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "subsystem": "bdev", 00:16:54.712 "config": [ 00:16:54.712 { 00:16:54.712 "method": "bdev_set_options", 00:16:54.712 "params": { 00:16:54.712 "bdev_io_pool_size": 65535, 00:16:54.712 "bdev_io_cache_size": 256, 00:16:54.712 "bdev_auto_examine": true, 00:16:54.712 "iobuf_small_cache_size": 128, 00:16:54.712 "iobuf_large_cache_size": 16 00:16:54.712 } 00:16:54.712 }, 
00:16:54.712 { 00:16:54.712 "method": "bdev_raid_set_options", 00:16:54.712 "params": { 00:16:54.712 "process_window_size_kb": 1024 00:16:54.712 } 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "method": "bdev_iscsi_set_options", 00:16:54.712 "params": { 00:16:54.712 "timeout_sec": 30 00:16:54.712 } 00:16:54.712 }, 00:16:54.712 { 00:16:54.712 "method": "bdev_nvme_set_options", 00:16:54.712 "params": { 00:16:54.712 "action_on_timeout": "none", 00:16:54.712 "timeout_us": 0, 00:16:54.712 "timeout_admin_us": 0, 00:16:54.712 "keep_alive_timeout_ms": 10000, 00:16:54.712 "arbitration_burst": 0, 00:16:54.712 "low_priority_weight": 0, 00:16:54.712 "medium_priority_weight": 0, 00:16:54.712 "high_priority_weight": 0, 00:16:54.712 "nvme_adminq_poll_period_us": 10000, 00:16:54.712 "nvme_ioq_poll_period_us": 0, 00:16:54.712 "io_queue_requests": 512, 00:16:54.712 "delay_cmd_submit": true, 00:16:54.712 "transport_retry_count": 4, 00:16:54.712 "bdev_retry_count": 3, 00:16:54.712 "transport_ack_timeout": 0, 00:16:54.712 "ctrlr_loss_timeout_sec": 0, 00:16:54.712 "reconnect_delay_sec": 0, 00:16:54.712 "fast_io_fail_timeout_sec": 0, 00:16:54.712 "disable_auto_failback": false, 00:16:54.712 "generate_uuids": false, 00:16:54.712 "transport_tos": 0, 00:16:54.712 "nvme_error_stat": false, 00:16:54.712 "rdma_srq_size": 0, 00:16:54.712 "io_path_stat": false, 00:16:54.712 "allow_accel_sequence": false, 00:16:54.712 "rdma_max_cq_size": 0, 00:16:54.712 "rdma_cm_event_timeout_ms": 0, 00:16:54.712 "dhchap_digests": [ 00:16:54.712 "sha256", 00:16:54.712 "sha384", 00:16:54.712 "sha512" 00:16:54.712 ], 00:16:54.713 "dhchap_dhgroups": [ 00:16:54.713 "null", 00:16:54.713 "ffdhe2048", 00:16:54.713 "ffdhe3072", 00:16:54.713 "ffdhe4096", 00:16:54.713 "ffdhe6144", 00:16:54.713 "ffdhe8192" 00:16:54.713 ] 00:16:54.713 } 00:16:54.713 }, 00:16:54.713 { 00:16:54.713 "method": "bdev_nvme_attach_controller", 00:16:54.713 "params": { 00:16:54.713 "name": "nvme0", 00:16:54.713 "trtype": "TCP", 00:16:54.713 "adrfam": "IPv4", 00:16:54.713 "traddr": "10.0.0.2", 00:16:54.713 "trsvcid": "4420", 00:16:54.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.713 "prchk_reftag": false, 00:16:54.713 "prchk_guard": false, 00:16:54.713 "ctrlr_loss_timeout_sec": 0, 00:16:54.713 "reconnect_delay_sec": 0, 00:16:54.713 "fast_io_fail_timeout_sec": 0, 00:16:54.713 "psk": "key0", 00:16:54.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.713 "hdgst": false, 00:16:54.713 "ddgst": false 00:16:54.713 } 00:16:54.713 }, 00:16:54.713 { 00:16:54.713 "method": "bdev_nvme_set_hotplug", 00:16:54.713 "params": { 00:16:54.713 "period_us": 100000, 00:16:54.713 "enable": false 00:16:54.713 } 00:16:54.713 }, 00:16:54.713 { 00:16:54.713 "method": "bdev_enable_histogram", 00:16:54.713 "params": { 00:16:54.713 "name": "nvme0n1", 00:16:54.713 "enable": true 00:16:54.713 } 00:16:54.713 }, 00:16:54.713 { 00:16:54.713 "method": "bdev_wait_for_examine" 00:16:54.713 } 00:16:54.713 ] 00:16:54.713 }, 00:16:54.713 { 00:16:54.713 "subsystem": "nbd", 00:16:54.713 "config": [] 00:16:54.713 } 00:16:54.713 ] 00:16:54.713 }' 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1050441 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1050441 ']' 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1050441 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.713 
03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1050441 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1050441' 00:16:54.713 killing process with pid 1050441 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1050441 00:16:54.713 Received shutdown signal, test time was about 1.000000 seconds 00:16:54.713 00:16:54.713 Latency(us) 00:16:54.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.713 =================================================================================================================== 00:16:54.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.713 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1050441 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1050297 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1050297 ']' 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1050297 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1050297 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1050297' 00:16:54.971 killing process with pid 1050297 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1050297 00:16:54.971 [2024-05-15 03:11:25.992733] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:54.971 03:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1050297 00:16:55.229 03:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:55.229 03:11:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.229 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:55.229 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.229 03:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:55.229 "subsystems": [ 00:16:55.229 { 00:16:55.229 "subsystem": "keyring", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "keyring_file_add_key", 00:16:55.229 "params": { 00:16:55.229 "name": "key0", 00:16:55.229 "path": "/tmp/tmp.zs7qpKdYN5" 00:16:55.229 } 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "iobuf", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "iobuf_set_options", 00:16:55.229 "params": { 00:16:55.229 "small_pool_count": 8192, 00:16:55.229 "large_pool_count": 1024, 00:16:55.229 "small_bufsize": 8192, 00:16:55.229 "large_bufsize": 135168 00:16:55.229 } 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 
"subsystem": "sock", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "sock_impl_set_options", 00:16:55.229 "params": { 00:16:55.229 "impl_name": "posix", 00:16:55.229 "recv_buf_size": 2097152, 00:16:55.229 "send_buf_size": 2097152, 00:16:55.229 "enable_recv_pipe": true, 00:16:55.229 "enable_quickack": false, 00:16:55.229 "enable_placement_id": 0, 00:16:55.229 "enable_zerocopy_send_server": true, 00:16:55.229 "enable_zerocopy_send_client": false, 00:16:55.229 "zerocopy_threshold": 0, 00:16:55.229 "tls_version": 0, 00:16:55.229 "enable_ktls": false 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "sock_impl_set_options", 00:16:55.229 "params": { 00:16:55.229 "impl_name": "ssl", 00:16:55.229 "recv_buf_size": 4096, 00:16:55.229 "send_buf_size": 4096, 00:16:55.229 "enable_recv_pipe": true, 00:16:55.229 "enable_quickack": false, 00:16:55.229 "enable_placement_id": 0, 00:16:55.229 "enable_zerocopy_send_server": true, 00:16:55.229 "enable_zerocopy_send_client": false, 00:16:55.229 "zerocopy_threshold": 0, 00:16:55.229 "tls_version": 0, 00:16:55.229 "enable_ktls": false 00:16:55.229 } 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "vmd", 00:16:55.229 "config": [] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "accel", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "accel_set_options", 00:16:55.229 "params": { 00:16:55.229 "small_cache_size": 128, 00:16:55.229 "large_cache_size": 16, 00:16:55.229 "task_count": 2048, 00:16:55.229 "sequence_count": 2048, 00:16:55.229 "buf_count": 2048 00:16:55.229 } 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "bdev", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "bdev_set_options", 00:16:55.229 "params": { 00:16:55.229 "bdev_io_pool_size": 65535, 00:16:55.229 "bdev_io_cache_size": 256, 00:16:55.229 "bdev_auto_examine": true, 00:16:55.229 "iobuf_small_cache_size": 128, 00:16:55.229 "iobuf_large_cache_size": 16 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_raid_set_options", 00:16:55.229 "params": { 00:16:55.229 "process_window_size_kb": 1024 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_iscsi_set_options", 00:16:55.229 "params": { 00:16:55.229 "timeout_sec": 30 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_nvme_set_options", 00:16:55.229 "params": { 00:16:55.229 "action_on_timeout": "none", 00:16:55.229 "timeout_us": 0, 00:16:55.229 "timeout_admin_us": 0, 00:16:55.229 "keep_alive_timeout_ms": 10000, 00:16:55.229 "arbitration_burst": 0, 00:16:55.229 "low_priority_weight": 0, 00:16:55.229 "medium_priority_weight": 0, 00:16:55.229 "high_priority_weight": 0, 00:16:55.229 "nvme_adminq_poll_period_us": 10000, 00:16:55.229 "nvme_ioq_poll_period_us": 0, 00:16:55.229 "io_queue_requests": 0, 00:16:55.229 "delay_cmd_submit": true, 00:16:55.229 "transport_retry_count": 4, 00:16:55.229 "bdev_retry_count": 3, 00:16:55.229 "transport_ack_timeout": 0, 00:16:55.229 "ctrlr_loss_timeout_sec": 0, 00:16:55.229 "reconnect_delay_sec": 0, 00:16:55.229 "fast_io_fail_timeout_sec": 0, 00:16:55.229 "disable_auto_failback": false, 00:16:55.229 "generate_uuids": false, 00:16:55.229 "transport_tos": 0, 00:16:55.229 "nvme_error_stat": false, 00:16:55.229 "rdma_srq_size": 0, 00:16:55.229 "io_path_stat": false, 00:16:55.229 "allow_accel_sequence": false, 00:16:55.229 "rdma_max_cq_size": 0, 00:16:55.229 "rdma_cm_event_timeout_ms": 0, 00:16:55.229 
"dhchap_digests": [ 00:16:55.229 "sha256", 00:16:55.229 "sha384", 00:16:55.229 "sha512" 00:16:55.229 ], 00:16:55.229 "dhchap_dhgroups": [ 00:16:55.229 "null", 00:16:55.229 "ffdhe2048", 00:16:55.229 "ffdhe3072", 00:16:55.229 "ffdhe4096", 00:16:55.229 "ffdhe6144", 00:16:55.229 "ffdhe8192" 00:16:55.229 ] 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_nvme_set_hotplug", 00:16:55.229 "params": { 00:16:55.229 "period_us": 100000, 00:16:55.229 "enable": false 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_malloc_create", 00:16:55.229 "params": { 00:16:55.229 "name": "malloc0", 00:16:55.229 "num_blocks": 8192, 00:16:55.229 "block_size": 4096, 00:16:55.229 "physical_block_size": 4096, 00:16:55.229 "uuid": "8b15749a-07bd-4cad-97e5-5e2c299653fc", 00:16:55.229 "optimal_io_boundary": 0 00:16:55.229 } 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "method": "bdev_wait_for_examine" 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "nbd", 00:16:55.229 "config": [] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "scheduler", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.229 "method": "framework_set_scheduler", 00:16:55.229 "params": { 00:16:55.229 "name": "static" 00:16:55.229 } 00:16:55.229 } 00:16:55.229 ] 00:16:55.229 }, 00:16:55.229 { 00:16:55.229 "subsystem": "nvmf", 00:16:55.229 "config": [ 00:16:55.229 { 00:16:55.230 "method": "nvmf_set_config", 00:16:55.230 "params": { 00:16:55.230 "discovery_filter": "match_any", 00:16:55.230 "admin_cmd_passthru": { 00:16:55.230 "identify_ctrlr": false 00:16:55.230 } 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_set_max_subsystems", 00:16:55.230 "params": { 00:16:55.230 "max_subsystems": 1024 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_set_crdt", 00:16:55.230 "params": { 00:16:55.230 "crdt1": 0, 00:16:55.230 "crdt2": 0, 00:16:55.230 "crdt3": 0 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_create_transport", 00:16:55.230 "params": { 00:16:55.230 "trtype": "TCP", 00:16:55.230 "max_queue_depth": 128, 00:16:55.230 "max_io_qpairs_per_ctrlr": 127, 00:16:55.230 "in_capsule_data_size": 4096, 00:16:55.230 "max_io_size": 131072, 00:16:55.230 "io_unit_size": 131072, 00:16:55.230 "max_aq_depth": 128, 00:16:55.230 "num_shared_buffers": 511, 00:16:55.230 "buf_cache_size": 4294967295, 00:16:55.230 "dif_insert_or_strip": false, 00:16:55.230 "zcopy": false, 00:16:55.230 "c2h_success": false, 00:16:55.230 "sock_priority": 0, 00:16:55.230 "abort_timeout_sec": 1, 00:16:55.230 "ack_timeout": 0, 00:16:55.230 "data_wr_pool_size": 0 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_create_subsystem", 00:16:55.230 "params": { 00:16:55.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.230 "allow_any_host": false, 00:16:55.230 "serial_number": "00000000000000000000", 00:16:55.230 "model_number": "SPDK bdev Controller", 00:16:55.230 "max_namespaces": 32, 00:16:55.230 "min_cntlid": 1, 00:16:55.230 "max_cntlid": 65519, 00:16:55.230 "ana_reporting": false 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_subsystem_add_host", 00:16:55.230 "params": { 00:16:55.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.230 "host": "nqn.2016-06.io.spdk:host1", 00:16:55.230 "psk": "key0" 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_subsystem_add_ns", 00:16:55.230 "params": { 00:16:55.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.230 
"namespace": { 00:16:55.230 "nsid": 1, 00:16:55.230 "bdev_name": "malloc0", 00:16:55.230 "nguid": "8B15749A07BD4CAD97E55E2C299653FC", 00:16:55.230 "uuid": "8b15749a-07bd-4cad-97e5-5e2c299653fc", 00:16:55.230 "no_auto_visible": false 00:16:55.230 } 00:16:55.230 } 00:16:55.230 }, 00:16:55.230 { 00:16:55.230 "method": "nvmf_subsystem_add_listener", 00:16:55.230 "params": { 00:16:55.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.230 "listen_address": { 00:16:55.230 "trtype": "TCP", 00:16:55.230 "adrfam": "IPv4", 00:16:55.230 "traddr": "10.0.0.2", 00:16:55.230 "trsvcid": "4420" 00:16:55.230 }, 00:16:55.230 "secure_channel": true 00:16:55.230 } 00:16:55.230 } 00:16:55.230 ] 00:16:55.230 } 00:16:55.230 ] 00:16:55.230 }' 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1051023 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1051023 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1051023 ']' 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.230 03:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.230 [2024-05-15 03:11:26.261877] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:16:55.230 [2024-05-15 03:11:26.261924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.230 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.230 [2024-05-15 03:11:26.318424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.487 [2024-05-15 03:11:26.397231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.487 [2024-05-15 03:11:26.397262] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.487 [2024-05-15 03:11:26.397269] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.487 [2024-05-15 03:11:26.397275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.487 [2024-05-15 03:11:26.397282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.487 [2024-05-15 03:11:26.397342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.487 [2024-05-15 03:11:26.599897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.487 [2024-05-15 03:11:26.631911] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:55.487 [2024-05-15 03:11:26.631970] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:55.487 [2024-05-15 03:11:26.639823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1051120 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1051120 /var/tmp/bdevperf.sock 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1051120 ']' 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
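The restart unfolding here closes the save_config round-trip started at @263/@264: both the live target configuration (tgtcfg) and the bdevperf configuration (bperfcfg) were dumped over JSON-RPC, the processes were killed, and new ones are now being started directly from those dumps, which checks that the TLS state (the key0 keyring entry, the secure_channel listener, and the host's psk reference) survives serialization. Condensed, and with process management omitted, the flow looks like:

    tgtcfg=$(scripts/rpc.py save_config)                              # dump target state
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)  # dump initiator state
    # ...stop both processes, then replay the configs at startup:
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

(The script itself feeds the configs through /dev/fd/62 and /dev/fd/63; the process substitution above is an equivalent sketch.)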
00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:56.052 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:56.052 "subsystems": [ 00:16:56.052 { 00:16:56.052 "subsystem": "keyring", 00:16:56.052 "config": [ 00:16:56.052 { 00:16:56.052 "method": "keyring_file_add_key", 00:16:56.052 "params": { 00:16:56.052 "name": "key0", 00:16:56.052 "path": "/tmp/tmp.zs7qpKdYN5" 00:16:56.052 } 00:16:56.052 } 00:16:56.052 ] 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "subsystem": "iobuf", 00:16:56.052 "config": [ 00:16:56.052 { 00:16:56.052 "method": "iobuf_set_options", 00:16:56.052 "params": { 00:16:56.052 "small_pool_count": 8192, 00:16:56.052 "large_pool_count": 1024, 00:16:56.052 "small_bufsize": 8192, 00:16:56.052 "large_bufsize": 135168 00:16:56.052 } 00:16:56.052 } 00:16:56.052 ] 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "subsystem": "sock", 00:16:56.052 "config": [ 00:16:56.052 { 00:16:56.052 "method": "sock_impl_set_options", 00:16:56.052 "params": { 00:16:56.052 "impl_name": "posix", 00:16:56.052 "recv_buf_size": 2097152, 00:16:56.052 "send_buf_size": 2097152, 00:16:56.052 "enable_recv_pipe": true, 00:16:56.052 "enable_quickack": false, 00:16:56.052 "enable_placement_id": 0, 00:16:56.052 "enable_zerocopy_send_server": true, 00:16:56.052 "enable_zerocopy_send_client": false, 00:16:56.052 "zerocopy_threshold": 0, 00:16:56.052 "tls_version": 0, 00:16:56.052 "enable_ktls": false 00:16:56.052 } 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "method": "sock_impl_set_options", 00:16:56.052 "params": { 00:16:56.052 "impl_name": "ssl", 00:16:56.052 "recv_buf_size": 4096, 00:16:56.052 "send_buf_size": 4096, 00:16:56.052 "enable_recv_pipe": true, 00:16:56.052 "enable_quickack": false, 00:16:56.052 "enable_placement_id": 0, 00:16:56.052 "enable_zerocopy_send_server": true, 00:16:56.052 "enable_zerocopy_send_client": false, 00:16:56.052 "zerocopy_threshold": 0, 00:16:56.052 "tls_version": 0, 00:16:56.052 "enable_ktls": false 00:16:56.052 } 00:16:56.052 } 00:16:56.052 ] 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "subsystem": "vmd", 00:16:56.052 "config": [] 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "subsystem": "accel", 00:16:56.052 "config": [ 00:16:56.052 { 00:16:56.052 "method": "accel_set_options", 00:16:56.052 "params": { 00:16:56.052 "small_cache_size": 128, 00:16:56.052 "large_cache_size": 16, 00:16:56.052 "task_count": 2048, 00:16:56.052 "sequence_count": 2048, 00:16:56.052 "buf_count": 2048 00:16:56.052 } 00:16:56.052 } 00:16:56.052 ] 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "subsystem": "bdev", 00:16:56.052 "config": [ 00:16:56.052 { 00:16:56.052 "method": "bdev_set_options", 00:16:56.052 "params": { 00:16:56.052 "bdev_io_pool_size": 65535, 00:16:56.052 "bdev_io_cache_size": 256, 00:16:56.052 "bdev_auto_examine": true, 00:16:56.052 "iobuf_small_cache_size": 128, 00:16:56.052 "iobuf_large_cache_size": 16 00:16:56.052 } 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "method": "bdev_raid_set_options", 00:16:56.052 "params": { 00:16:56.052 "process_window_size_kb": 1024 00:16:56.052 } 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "method": "bdev_iscsi_set_options", 00:16:56.052 "params": { 00:16:56.052 "timeout_sec": 30 00:16:56.052 } 
00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "method": "bdev_nvme_set_options", 00:16:56.052 "params": { 00:16:56.052 "action_on_timeout": "none", 00:16:56.052 "timeout_us": 0, 00:16:56.052 "timeout_admin_us": 0, 00:16:56.052 "keep_alive_timeout_ms": 10000, 00:16:56.052 "arbitration_burst": 0, 00:16:56.052 "low_priority_weight": 0, 00:16:56.052 "medium_priority_weight": 0, 00:16:56.052 "high_priority_weight": 0, 00:16:56.052 "nvme_adminq_poll_period_us": 10000, 00:16:56.052 "nvme_ioq_poll_period_us": 0, 00:16:56.052 "io_queue_requests": 512, 00:16:56.052 "delay_cmd_submit": true, 00:16:56.052 "transport_retry_count": 4, 00:16:56.052 "bdev_retry_count": 3, 00:16:56.052 "transport_ack_timeout": 0, 00:16:56.052 "ctrlr_loss_timeout_sec": 0, 00:16:56.052 "reconnect_delay_sec": 0, 00:16:56.052 "fast_io_fail_timeout_sec": 0, 00:16:56.052 "disable_auto_failback": false, 00:16:56.052 "generate_uuids": false, 00:16:56.052 "transport_tos": 0, 00:16:56.052 "nvme_error_stat": false, 00:16:56.052 "rdma_srq_size": 0, 00:16:56.052 "io_path_stat": false, 00:16:56.052 "allow_accel_sequence": false, 00:16:56.052 "rdma_max_cq_size": 0, 00:16:56.052 "rdma_cm_event_timeout_ms": 0, 00:16:56.052 "dhchap_digests": [ 00:16:56.052 "sha256", 00:16:56.052 "sha384", 00:16:56.052 "sha512" 00:16:56.052 ], 00:16:56.052 "dhchap_dhgroups": [ 00:16:56.052 "null", 00:16:56.052 "ffdhe2048", 00:16:56.052 "ffdhe3072", 00:16:56.052 "ffdhe4096", 00:16:56.052 "ffdhe6144", 00:16:56.052 "ffdhe8192" 00:16:56.052 ] 00:16:56.052 } 00:16:56.052 }, 00:16:56.052 { 00:16:56.052 "method": "bdev_nvme_attach_controller", 00:16:56.052 "params": { 00:16:56.052 "name": "nvme0", 00:16:56.052 "trtype": "TCP", 00:16:56.052 "adrfam": "IPv4", 00:16:56.052 "traddr": "10.0.0.2", 00:16:56.052 "trsvcid": "4420", 00:16:56.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.052 "prchk_reftag": false, 00:16:56.052 "prchk_guard": false, 00:16:56.052 "ctrlr_loss_timeout_sec": 0, 00:16:56.052 "reconnect_delay_sec": 0, 00:16:56.052 "fast_io_fail_timeout_sec": 0, 00:16:56.052 "psk": "key0", 00:16:56.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.052 "hdgst": false, 00:16:56.053 "ddgst": false 00:16:56.053 } 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "method": "bdev_nvme_set_hotplug", 00:16:56.053 "params": { 00:16:56.053 "period_us": 100000, 00:16:56.053 "enable": false 00:16:56.053 } 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "method": "bdev_enable_histogram", 00:16:56.053 "params": { 00:16:56.053 "name": "nvme0n1", 00:16:56.053 "enable": true 00:16:56.053 } 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "method": "bdev_wait_for_examine" 00:16:56.053 } 00:16:56.053 ] 00:16:56.053 }, 00:16:56.053 { 00:16:56.053 "subsystem": "nbd", 00:16:56.053 "config": [] 00:16:56.053 } 00:16:56.053 ] 00:16:56.053 }' 00:16:56.053 [2024-05-15 03:11:27.137737] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
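The bdevperf config echoed above carries the client half of the same handshake: keyring_file_add_key registers the PSK file /tmp/tmp.zs7qpKdYN5 under the name key0, and bdev_nvme_attach_controller then references it as "psk": "key0". Issued as live RPCs instead of a startup config, the sequence the trace below performs condenses to roughly this sketch (rpc.py and bdevperf.py shortened from their full workspace paths; passing the keyring name to --psk is assumed to mirror the JSON form, whereas the fips test later in this log hands the same flag a key file path):

# register the PSK, attach over TLS, confirm the controller, then run I/O
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zs7qpKdYN5
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]    # the attach over the secure channel actually stuck
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests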
00:16:56.053 [2024-05-15 03:11:27.137784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051120 ]
00:16:56.053 EAL: No free 2048 kB hugepages reported on node 1
00:16:56.053 [2024-05-15 03:11:27.190698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:56.310 [2024-05-15 03:11:27.270490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:56.310 [2024-05-15 03:11:27.412627] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:16:56.873 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:16:56.873 03:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0
00:16:56.873 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:56.874 03:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name'
00:16:57.130 03:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.130 03:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:57.131 Running I/O for 1 seconds...
00:16:58.062
00:16:58.062 Latency(us)
00:16:58.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.062 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:58.062 Verification LBA range: start 0x0 length 0x2000
00:16:58.062 nvme0n1 : 1.01 5716.52 22.33 0.00 0.00 22232.31 4872.46 23023.08
00:16:58.062 ===================================================================================================================
00:16:58.062 Total : 5716.52 22.33 0.00 0.00 22232.31 4872.46 23023.08
00:16:58.062 0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:16:58.320 nvmf_trace.0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1051120
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1051120 ']'
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1051120
00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1051120 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1051120' 00:16:58.320 killing process with pid 1051120 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1051120 00:16:58.320 Received shutdown signal, test time was about 1.000000 seconds 00:16:58.320 00:16:58.320 Latency(us) 00:16:58.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.320 =================================================================================================================== 00:16:58.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.320 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1051120 00:16:58.577 03:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:58.577 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.578 rmmod nvme_tcp 00:16:58.578 rmmod nvme_fabrics 00:16:58.578 rmmod nvme_keyring 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1051023 ']' 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1051023 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1051023 ']' 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1051023 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1051023 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1051023' 00:16:58.578 killing process with pid 1051023 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1051023 00:16:58.578 [2024-05-15 03:11:29.664080] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:58.578 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 1051023 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.835 03:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.363 03:11:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.363 03:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YetHNHDjPI /tmp/tmp.o64wS4KXGV /tmp/tmp.zs7qpKdYN5 00:17:01.363 00:17:01.363 real 1m23.702s 00:17:01.363 user 2m9.425s 00:17:01.363 sys 0m28.050s 00:17:01.363 03:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:01.363 03:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 ************************************ 00:17:01.363 END TEST nvmf_tls 00:17:01.363 ************************************ 00:17:01.363 03:11:31 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:01.363 03:11:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:01.363 03:11:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:01.363 03:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 ************************************ 00:17:01.363 START TEST nvmf_fips 00:17:01.363 ************************************ 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:01.363 * Looking for test storage... 
00:17:01.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.363 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.364 03:11:32 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:01.364 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:01.364 Error setting digest 00:17:01.364 0002AA25477F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:01.364 0002AA25477F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.365 03:11:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.620 
03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:06.620 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:06.620 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:06.620 Found net devices under 0000:86:00.0: cvl_0_0 00:17:06.620 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:06.621 Found net devices under 0000:86:00.1: cvl_0_1 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:17:06.621 00:17:06.621 --- 10.0.0.2 ping statistics --- 00:17:06.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.621 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:06.621 00:17:06.621 --- 10.0.0.1 ping statistics --- 00:17:06.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.621 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1055066 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1055066 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1055066 ']' 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.621 03:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:06.878 [2024-05-15 03:11:37.835220] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:17:06.878 [2024-05-15 03:11:37.835263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.878 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.878 [2024-05-15 03:11:37.892104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.878 [2024-05-15 03:11:37.967059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.878 [2024-05-15 03:11:37.967095] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:06.878 [2024-05-15 03:11:37.967102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.878 [2024-05-15 03:11:37.967111] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.878 [2024-05-15 03:11:37.967116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.878 [2024-05-15 03:11:37.967132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.811 [2024-05-15 03:11:38.802756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.811 [2024-05-15 03:11:38.818750] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.811 [2024-05-15 03:11:38.818790] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.811 [2024-05-15 03:11:38.818927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.811 [2024-05-15 03:11:38.847021] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:07.811 malloc0 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1055310 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1055310 /var/tmp/bdevperf.sock 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 1055310 ']' 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.811 03:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.811 [2024-05-15 03:11:38.927116] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:17:07.811 [2024-05-15 03:11:38.927165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055310 ] 00:17:07.811 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.069 [2024-05-15 03:11:38.976663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.069 [2024-05-15 03:11:39.049176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.632 03:11:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:08.632 03:11:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:17:08.632 03:11:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:08.889 [2024-05-15 03:11:39.871783] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.889 [2024-05-15 03:11:39.871865] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:08.889 TLSTESTn1 00:17:08.889 03:11:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:08.889 Running I/O for 10 seconds... 
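Before the 10-second run's numbers arrive, the fips-mode setup traced above is worth condensing: the interchange-format TLS PSK is written to key.txt with 0600 permissions, handed to the target via the deprecated PSK-path form of nvmf_tcp_subsystem_add_host warned about above, and then passed by path to the same --psk flag on attach. A minimal sketch of the client side, with paths shortened:

# fips variant: the PSK travels as a 0600 key file rather than a keyring name
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key.txt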
00:17:21.121
00:17:21.121 Latency(us)
00:17:21.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.121 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:21.121 Verification LBA range: start 0x0 length 0x2000
00:17:21.121 TLSTESTn1 : 10.01 5524.36 21.58 0.00 0.00 23133.31 5100.41 27354.16
00:17:21.121 ===================================================================================================================
00:17:21.121 Total : 5524.36 21.58 0.00 0.00 23133.31 5100.41 27354.16
00:17:21.121 0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:17:21.122 nvmf_trace.0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1055310
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1055310 ']'
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1055310
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1055310
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1055310'
00:17:21.122 killing process with pid 1055310
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1055310
00:17:21.122 Received shutdown signal, test time was about 10.000000 seconds
00:17:21.122
00:17:21.122 Latency(us)
00:17:21.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.122 ===================================================================================================================
00:17:21.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:21.122 [2024-05-15 03:11:50.229603] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1055310
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.122 rmmod nvme_tcp 00:17:21.122 rmmod nvme_fabrics 00:17:21.122 rmmod nvme_keyring 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1055066 ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1055066 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1055066 ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1055066 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1055066 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1055066' 00:17:21.122 killing process with pid 1055066 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1055066 00:17:21.122 [2024-05-15 03:11:50.534210] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:21.122 [2024-05-15 03:11:50.534247] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1055066 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.122 03:11:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.687 03:11:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:21.687 03:11:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:21.687 00:17:21.687 real 0m20.794s 00:17:21.687 user 0m22.730s 00:17:21.687 sys 0m8.856s 00:17:21.687 03:11:52 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:21.687 03:11:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:21.687 ************************************ 00:17:21.687 END TEST nvmf_fips 00:17:21.687 ************************************ 00:17:21.687 03:11:52 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:17:21.687 03:11:52 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:17:21.687 03:11:52 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:17:21.687 03:11:52 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:17:21.687 03:11:52 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.687 03:11:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.952 03:11:57 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:26.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:26.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:26.952 Found net devices under 0000:86:00.0: cvl_0_0 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:26.952 Found net devices under 0000:86:00.1: cvl_0_1 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:17:26.952 03:11:57 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
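The discovery pass above resolves the supported NIC list to concrete kernel interfaces: each E810 port (0x8086:0x159b) found on the PCI bus is mapped through sysfs to the net device registered under it, which is how 0000:86:00.0 and 0000:86:00.1 become cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs walk, assuming only a bound driver and the standard /sys/bus/pci layout (illustration only, not the harness code itself):

#!/usr/bin/env bash
# Enumerate E810 ports (vendor 0x8086, device 0x159b) and print the net
# device(s) the kernel registered under each PCI function.
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done

A port whose driver is not loaded has no net/ entry, which is why the harness reloads ice (rmmod ice; modprobe ice; sleep 5) and then repeats this same walk later in the run.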
00:17:26.952 03:11:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:26.952 03:11:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.952 03:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.952 ************************************ 00:17:26.952 START TEST nvmf_perf_adq 00:17:26.952 ************************************ 00:17:26.952 03:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:26.952 * Looking for test storage... 00:17:26.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.952 03:11:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.953 03:11:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:32.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:32.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.205 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:32.206 Found net devices under 0000:86:00.0: cvl_0_0 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:32.206 Found net devices under 0000:86:00.1: cvl_0_1 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:17:32.206 03:12:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:17:33.580 03:12:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:17:35.483 03:12:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:40.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:40.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:40.749 Found net devices under 0000:86:00.0: cvl_0_0 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:40.749 Found net devices under 0000:86:00.1: cvl_0_1 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:17:40.749 00:17:40.749 --- 10.0.0.2 ping statistics --- 00:17:40.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.749 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:17:40.749 00:17:40.749 --- 10.0.0.1 ping statistics --- 00:17:40.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.749 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.749 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1065522 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1065522 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1065522 ']' 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
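nvmftestinit above wired the two physical ports into a self-contained test topology: the target port is pushed into its own network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic really crosses the wire between 10.0.0.1 and 10.0.0.2. Condensed from the commands just traced (same interface and namespace names; a sketch, not the harness itself):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                  # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns sanity check

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk so every listener it creates binds inside the namespace, and waitforlisten polls the /var/tmp/spdk.sock RPC socket until the app is up.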
00:17:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.750 03:12:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:40.750 [2024-05-15 03:12:11.712488] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:17:40.750 [2024-05-15 03:12:11.712535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.750 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.750 [2024-05-15 03:12:11.771665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.750 [2024-05-15 03:12:11.853289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.750 [2024-05-15 03:12:11.853323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.750 [2024-05-15 03:12:11.853330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.750 [2024-05-15 03:12:11.853337] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.750 [2024-05-15 03:12:11.853342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.750 [2024-05-15 03:12:11.853598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.750 [2024-05-15 03:12:11.853616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.750 [2024-05-15 03:12:11.853701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.750 [2024-05-15 03:12:11.853702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 [2024-05-15 03:12:12.711842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 Malloc1 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.686 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:41.687 [2024-05-15 03:12:12.759198] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:41.687 [2024-05-15 03:12:12.759430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1065777 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:17:41.687 03:12:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:41.687 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:17:44.217 "tick_rate": 2300000000, 00:17:44.217 "poll_groups": [ 00:17:44.217 { 00:17:44.217 "name": "nvmf_tgt_poll_group_000", 00:17:44.217 "admin_qpairs": 1, 00:17:44.217 "io_qpairs": 1, 00:17:44.217 "current_admin_qpairs": 1, 00:17:44.217 "current_io_qpairs": 1, 00:17:44.217 "pending_bdev_io": 0, 00:17:44.217 "completed_nvme_io": 19281, 00:17:44.217 "transports": [ 00:17:44.217 { 00:17:44.217 "trtype": "TCP" 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }, 00:17:44.217 { 00:17:44.217 "name": "nvmf_tgt_poll_group_001", 00:17:44.217 "admin_qpairs": 0, 00:17:44.217 "io_qpairs": 1, 00:17:44.217 "current_admin_qpairs": 0, 00:17:44.217 "current_io_qpairs": 1, 00:17:44.217 "pending_bdev_io": 0, 00:17:44.217 "completed_nvme_io": 19585, 00:17:44.217 "transports": [ 00:17:44.217 { 00:17:44.217 "trtype": "TCP" 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }, 00:17:44.217 { 00:17:44.217 "name": "nvmf_tgt_poll_group_002", 00:17:44.217 "admin_qpairs": 0, 00:17:44.217 "io_qpairs": 1, 00:17:44.217 "current_admin_qpairs": 0, 00:17:44.217 "current_io_qpairs": 1, 00:17:44.217 "pending_bdev_io": 0, 00:17:44.217 "completed_nvme_io": 19524, 00:17:44.217 "transports": [ 00:17:44.217 { 00:17:44.217 "trtype": "TCP" 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }, 00:17:44.217 { 00:17:44.217 "name": "nvmf_tgt_poll_group_003", 00:17:44.217 "admin_qpairs": 0, 00:17:44.217 "io_qpairs": 1, 00:17:44.217 "current_admin_qpairs": 0, 00:17:44.217 "current_io_qpairs": 1, 00:17:44.217 "pending_bdev_io": 0, 00:17:44.217 "completed_nvme_io": 19218, 00:17:44.217 "transports": [ 00:17:44.217 { 00:17:44.217 "trtype": "TCP" 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }' 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:17:44.217 03:12:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1065777 00:17:52.416 Initializing NVMe Controllers 00:17:52.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:52.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:17:52.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:17:52.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:17:52.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:17:52.416 Initialization complete. Launching workers. 
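spdk_nvme_perf runs on cores 4-7 (-c 0xF0) against the target's reactors on cores 0-3 (-m 0xF), one connection per initiator core; the Associating lines above confirm lcores 4-7. The nvmf_get_stats snapshot is the actual ADQ assertion: each of the four poll groups must own exactly one I/O qpair, proving the hardware steered each connection to a different target core instead of stacking them on one reactor. A standalone version of that check, assuming scripts/rpc.py stands in for the harness's rpc_cmd wrapper (sketch only):

# Count poll groups that currently own exactly one I/O qpair; ADQ placement
# succeeded only if all four qualify.
count=$(scripts/rpc.py nvmf_get_stats |
        jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' |
        wc -l)
if [[ $count -ne 4 ]]; then
    echo "I/O qpairs were not spread one-per-poll-group" && exit 1
fi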
00:17:52.416 ========================================================
00:17:52.416 Latency(us)
00:17:52.416 Device Information : IOPS MiB/s Average min max
00:17:52.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9910.40 38.71 6459.74 2074.38 10417.61
00:17:52.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10093.10 39.43 6342.62 2218.69 11100.80
00:17:52.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10039.60 39.22 6374.17 2495.81 10462.24
00:17:52.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9975.90 38.97 6415.04 1934.87 10462.85
00:17:52.416 ========================================================
00:17:52.416 Total : 40019.00 156.32 6397.59 1934.87 11100.80
00:17:52.416
00:17:52.416 03:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1065522 ']'
03:12:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1065522
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1065522 ']'
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1065522
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1065522
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1065522'
killing process with pid 1065522
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1065522
[2024-05-15 03:12:22.994836] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
03:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1065522
03:12:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
03:12:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
03:12:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
03:12:23
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.416 03:12:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:52.416 03:12:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.416 03:12:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.416 03:12:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.316 03:12:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.316 03:12:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:17:54.316 03:12:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:17:55.691 03:12:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:17:57.593 03:12:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.864 
03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.864 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:02.865 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:02.865 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:02.865 Found net devices under 0000:86:00.0: cvl_0_0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:02.865 Found net devices under 0000:86:00.1: cvl_0_1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:02.865 00:18:02.865 --- 10.0.0.2 ping statistics --- 00:18:02.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.865 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:18:02.865 00:18:02.865 --- 10.0.0.1 ping statistics --- 00:18:02.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.865 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:02.865 net.core.busy_poll = 1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:02.865 net.core.busy_read = 1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1069559 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1069559 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1069559 ']' 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:02.865 03:12:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:02.865 [2024-05-15 03:12:33.903901] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:02.865 [2024-05-15 03:12:33.903953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.865 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.865 [2024-05-15 03:12:33.962826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.123 [2024-05-15 03:12:34.039899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.123 [2024-05-15 03:12:34.039938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.123 [2024-05-15 03:12:34.039945] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.123 [2024-05-15 03:12:34.039950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.123 [2024-05-15 03:12:34.039955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
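For readers reconstructing the setup from the trace above: the target-side E810 port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2/24, the initiator port cvl_0_1 stayed in the root namespace at 10.0.0.1/24, and adq_configure_driver then split the target port into two hardware traffic classes and steered NVMe/TCP traffic into the second one before nvmf_tgt was started. A condensed sketch of that same sequence (the interface names, addresses, and 2+2 queue layout are specific to this run):

  # Isolate the target port in its own network namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # ADQ prerequisites: hardware TC offload on, packet-inspect optimization off, aggressive busy polling
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes offloaded in channel mode: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Hardware-only (skip_sw) flower filter: NVMe/TCP traffic to 10.0.0.2:4420 lands in TC1
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The scripts/perf/nvmf/set_xps_rxqs helper run right afterwards then pins transmit-queue selection to the matching receive queues (XPS via the rxqs map) so both directions of a connection use the same queue pair.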
00:18:03.123 [2024-05-15 03:12:34.040023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.123 [2024-05-15 03:12:34.040262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.123 [2024-05-15 03:12:34.040325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.123 [2024-05-15 03:12:34.040327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.689 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.947 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.947 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.948 [2024-05-15 03:12:34.891214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.948 Malloc1 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.948 03:12:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:03.948 [2024-05-15 03:12:34.938896] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:03.948 [2024-05-15 03:12:34.939125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1069811 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:03.948 03:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:03.948 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:18:05.843 "tick_rate": 2300000000, 00:18:05.843 "poll_groups": [ 00:18:05.843 { 00:18:05.843 "name": "nvmf_tgt_poll_group_000", 00:18:05.843 "admin_qpairs": 1, 00:18:05.843 "io_qpairs": 4, 00:18:05.843 "current_admin_qpairs": 1, 00:18:05.843 "current_io_qpairs": 4, 00:18:05.843 "pending_bdev_io": 0, 00:18:05.843 "completed_nvme_io": 43671, 00:18:05.843 "transports": [ 00:18:05.843 { 00:18:05.843 "trtype": "TCP" 00:18:05.843 } 00:18:05.843 ] 00:18:05.843 }, 00:18:05.843 { 00:18:05.843 "name": "nvmf_tgt_poll_group_001", 00:18:05.843 "admin_qpairs": 0, 00:18:05.843 "io_qpairs": 0, 00:18:05.843 "current_admin_qpairs": 0, 00:18:05.843 "current_io_qpairs": 0, 00:18:05.843 "pending_bdev_io": 0, 00:18:05.843 "completed_nvme_io": 0, 00:18:05.843 "transports": [ 00:18:05.843 { 00:18:05.843 "trtype": "TCP" 00:18:05.843 } 00:18:05.843 ] 00:18:05.843 }, 00:18:05.843 { 00:18:05.843 "name": 
"nvmf_tgt_poll_group_002", 00:18:05.843 "admin_qpairs": 0, 00:18:05.843 "io_qpairs": 0, 00:18:05.843 "current_admin_qpairs": 0, 00:18:05.843 "current_io_qpairs": 0, 00:18:05.843 "pending_bdev_io": 0, 00:18:05.843 "completed_nvme_io": 0, 00:18:05.843 "transports": [ 00:18:05.843 { 00:18:05.843 "trtype": "TCP" 00:18:05.843 } 00:18:05.843 ] 00:18:05.843 }, 00:18:05.843 { 00:18:05.843 "name": "nvmf_tgt_poll_group_003", 00:18:05.843 "admin_qpairs": 0, 00:18:05.843 "io_qpairs": 0, 00:18:05.843 "current_admin_qpairs": 0, 00:18:05.843 "current_io_qpairs": 0, 00:18:05.843 "pending_bdev_io": 0, 00:18:05.843 "completed_nvme_io": 0, 00:18:05.843 "transports": [ 00:18:05.843 { 00:18:05.843 "trtype": "TCP" 00:18:05.843 } 00:18:05.843 ] 00:18:05.843 } 00:18:05.843 ] 00:18:05.843 }' 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:05.843 03:12:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:18:06.101 03:12:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:18:06.101 03:12:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:18:06.101 03:12:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1069811 00:18:14.200 Initializing NVMe Controllers 00:18:14.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:14.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:14.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:14.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:14.200 Initialization complete. Launching workers. 
00:18:14.200 ========================================================
00:18:14.200 Latency(us)
00:18:14.200 Device Information : IOPS MiB/s Average min max
00:18:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6184.30 24.16 10372.95 1263.74 58017.98
00:18:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5401.60 21.10 11888.70 1567.52 58209.95
00:18:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6050.90 23.64 10580.60 1424.77 59327.60
00:18:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5239.40 20.47 12257.39 1611.61 57685.51
00:18:14.201 ========================================================
00:18:14.201 Total : 22876.19 89.36 11217.38 1263.74 59327.60
00:18:14.201
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:14.201 rmmod nvme_tcp
00:18:14.201 rmmod nvme_fabrics
00:18:14.201 rmmod nvme_keyring
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1069559 ']'
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1069559
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1069559 ']'
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1069559
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1069559
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1069559'
00:18:14.201 killing process with pid 1069559
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1069559
00:18:14.201 [2024-05-15 03:12:45.214719] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1069559
00:18:14.201 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:14.460
03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.460 03:12:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.361 03:12:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.361 03:12:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:16.361 00:18:16.361 real 0m49.524s 00:18:16.361 user 2m49.631s 00:18:16.361 sys 0m8.801s 00:18:16.361 03:12:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:16.361 03:12:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:16.361 ************************************ 00:18:16.361 END TEST nvmf_perf_adq 00:18:16.361 ************************************ 00:18:16.620 03:12:47 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:16.620 03:12:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:16.620 03:12:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:16.620 03:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.620 ************************************ 00:18:16.620 START TEST nvmf_shutdown 00:18:16.620 ************************************ 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:16.620 * Looking for test storage... 
00:18:16.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:16.620 ************************************ 00:18:16.620 START TEST nvmf_shutdown_tc1 00:18:16.620 ************************************ 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:18:16.620 03:12:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.620 03:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:21.889 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:21.889 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.889 03:12:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:21.889 Found net devices under 0000:86:00.0: cvl_0_0 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.889 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:21.890 Found net devices under 0000:86:00.1: cvl_0_1 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.890 03:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.890 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.890 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.890 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.890 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:18:22.148 00:18:22.148 --- 10.0.0.2 ping statistics --- 00:18:22.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.148 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:18:22.148 00:18:22.148 --- 10.0.0.1 ping statistics --- 00:18:22.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.148 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1075024 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1075024 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1075024 ']' 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.148 03:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.148 [2024-05-15 03:12:53.196172] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
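The nvmfpid=1075024 / waitforlisten 1075024 exchange above is the usual target bring-up: nvmf_tgt is launched in the background inside the namespace, and the harness polls the RPC socket (note the max_retries=100 local) until the app answers or the retries run out. A minimal sketch of that wait loop, assuming $rootdir points at the SPDK checkout and the default /var/tmp/spdk.sock RPC socket (the real waitforlisten in autotest_common.sh also validates its pid argument and supports alternate RPC addresses):

  # Launch the target inside the namespace, exactly as nvmfappstart does above
  "${NVMF_TARGET_NS_CMD[@]}" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  for ((i = 100; i != 0; i--)); do
      # Fail fast if the target process already died
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited prematurely"; exit 1; }
      # rpc_get_methods succeeds once the app is listening on the UNIX socket
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
  (( i != 0 )) || { echo "nvmf_tgt never started listening on /var/tmp/spdk.sock"; exit 1; }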
00:18:22.148 [2024-05-15 03:12:53.196216] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.148 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.148 [2024-05-15 03:12:53.254400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.413 [2024-05-15 03:12:53.335855] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.413 [2024-05-15 03:12:53.335890] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.413 [2024-05-15 03:12:53.335897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.413 [2024-05-15 03:12:53.335903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.413 [2024-05-15 03:12:53.335909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.413 [2024-05-15 03:12:53.335958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.413 [2024-05-15 03:12:53.336042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.413 [2024-05-15 03:12:53.336523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.413 [2024-05-15 03:12:53.336524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 [2024-05-15 03:12:54.055343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.978 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 Malloc1 00:18:23.236 [2024-05-15 03:12:54.151123] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:23.236 [2024-05-15 03:12:54.151347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.236 Malloc2 00:18:23.236 Malloc3 00:18:23.236 Malloc4 00:18:23.236 Malloc5 00:18:23.236 Malloc6 00:18:23.236 Malloc7 00:18:23.494 Malloc8 00:18:23.494 Malloc9 00:18:23.494 Malloc10 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:23.494 03:12:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1075305 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1075305 /var/tmp/bdevperf.sock 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1075305 ']' 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:23.494 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 [2024-05-15 03:12:54.615114] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 
initialization... 00:18:23.495 [2024-05-15 03:12:54.615168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:23.495 { 00:18:23.495 "params": { 00:18:23.495 "name": "Nvme$subsystem", 00:18:23.495 "trtype": "$TEST_TRANSPORT", 00:18:23.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.495 "adrfam": "ipv4", 00:18:23.495 "trsvcid": "$NVMF_PORT", 00:18:23.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.495 "hdgst": ${hdgst:-false}, 
00:18:23.495 "ddgst": ${ddgst:-false} 00:18:23.495 }, 00:18:23.495 "method": "bdev_nvme_attach_controller" 00:18:23.495 } 00:18:23.495 EOF 00:18:23.495 )") 00:18:23.495 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:18:23.495 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:23.496 03:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme1", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme2", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme3", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme4", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme5", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme6", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme7", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 
00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme8", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme9", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 },{ 00:18:23.496 "params": { 00:18:23.496 "name": "Nvme10", 00:18:23.496 "trtype": "tcp", 00:18:23.496 "traddr": "10.0.0.2", 00:18:23.496 "adrfam": "ipv4", 00:18:23.496 "trsvcid": "4420", 00:18:23.496 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:23.496 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:23.496 "hdgst": false, 00:18:23.496 "ddgst": false 00:18:23.496 }, 00:18:23.496 "method": "bdev_nvme_attach_controller" 00:18:23.496 }' 00:18:23.754 [2024-05-15 03:12:54.670814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.754 [2024-05-15 03:12:54.745069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1075305 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:25.126 03:12:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:18:26.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1075305 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1075024 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
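The xtrace above and below is gen_nvmf_target_json at work: one heredoc-built JSON fragment per subsystem is appended to a config array (nvmf/common.sh@534/@554), each fragment is checked with jq (@556), the array is joined with commas through IFS (@557, @558), and the caller hands the result to the app over an anonymous pipe, which is why the launch line shows --json /dev/fd/63 and the "Killed" message above shows the underlying <(gen_nvmf_target_json ...) process substitution. A simplified bash sketch of the pattern (a reconstruction, not the verbatim nvmf/common.sh function: the real fragments also carry the adrfam/hdgst/ddgst fields printed above, and the surrounding JSON envelope is added elsewhere in the harness):

# Emit one bdev_nvme_attach_controller request per subsystem id,
# comma-joined so the caller can embed them in a JSON config array.
gen_json() {
  local config=() i
  for i in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"   # the first IFS character joins the array elements
}
# The app then reads the config without a temp file, e.g.:
#   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_json {1..10})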
00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.060 "ddgst": ${ddgst:-false} 00:18:26.060 }, 00:18:26.060 "method": "bdev_nvme_attach_controller" 00:18:26.060 } 00:18:26.060 EOF 00:18:26.060 )") 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.060 03:12:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.060 "ddgst": ${ddgst:-false} 00:18:26.060 }, 00:18:26.060 "method": "bdev_nvme_attach_controller" 00:18:26.060 } 00:18:26.060 EOF 00:18:26.060 )") 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.060 "ddgst": ${ddgst:-false} 00:18:26.060 }, 00:18:26.060 "method": "bdev_nvme_attach_controller" 00:18:26.060 } 00:18:26.060 EOF 00:18:26.060 )") 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.060 "ddgst": ${ddgst:-false} 00:18:26.060 }, 00:18:26.060 "method": "bdev_nvme_attach_controller" 00:18:26.060 } 00:18:26.060 EOF 00:18:26.060 )") 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.060 03:12:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.060 "ddgst": ${ddgst:-false} 00:18:26.060 }, 00:18:26.060 "method": "bdev_nvme_attach_controller" 00:18:26.060 } 00:18:26.060 EOF 00:18:26.060 )") 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.060 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.060 { 00:18:26.060 "params": { 00:18:26.060 "name": "Nvme$subsystem", 00:18:26.060 "trtype": "$TEST_TRANSPORT", 00:18:26.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.060 "adrfam": "ipv4", 00:18:26.060 "trsvcid": "$NVMF_PORT", 00:18:26.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.060 "hdgst": ${hdgst:-false}, 00:18:26.061 "ddgst": ${ddgst:-false} 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 } 00:18:26.061 EOF 00:18:26.061 )") 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.061 { 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme$subsystem", 00:18:26.061 "trtype": "$TEST_TRANSPORT", 00:18:26.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "$NVMF_PORT", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.061 "hdgst": ${hdgst:-false}, 00:18:26.061 "ddgst": ${ddgst:-false} 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 } 00:18:26.061 EOF 00:18:26.061 )") 00:18:26.061 [2024-05-15 03:12:57.035015] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
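This startup banner is bdevperf beginning the second half of tc1 (shutdown.sh@91): the bdev_svc instance from @77 was killed above and the same ten-controller JSON is regenerated, this time for a real I/O pass with -q 64 (queue depth), -o 65536 (64 KiB I/Os), -w verify and -t 1 (one second), as seen in the launch line above. Those flags make the results table further below easy to sanity-check, because the MiB/s column is just IOPS times the 64 KiB I/O size:

  244.40 IOPS  x 65536 B = 244.40/16 MiB/s  ~= 15.28 MiB/s    (Nvme1n1 row)
  2790.66 IOPS x 65536 B = 2790.66/16 MiB/s ~= 174.42 MiB/s   (Total row)

The Average/min/max columns in that table are latencies in microseconds, per its Latency(us) header.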
00:18:26.061 [2024-05-15 03:12:57.035064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075766 ] 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.061 { 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme$subsystem", 00:18:26.061 "trtype": "$TEST_TRANSPORT", 00:18:26.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "$NVMF_PORT", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.061 "hdgst": ${hdgst:-false}, 00:18:26.061 "ddgst": ${ddgst:-false} 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 } 00:18:26.061 EOF 00:18:26.061 )") 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.061 { 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme$subsystem", 00:18:26.061 "trtype": "$TEST_TRANSPORT", 00:18:26.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "$NVMF_PORT", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.061 "hdgst": ${hdgst:-false}, 00:18:26.061 "ddgst": ${ddgst:-false} 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 } 00:18:26.061 EOF 00:18:26.061 )") 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.061 { 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme$subsystem", 00:18:26.061 "trtype": "$TEST_TRANSPORT", 00:18:26.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "$NVMF_PORT", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.061 "hdgst": ${hdgst:-false}, 00:18:26.061 "ddgst": ${ddgst:-false} 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 } 00:18:26.061 EOF 00:18:26.061 )") 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
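Two EAL details here are easy to misread. The "EAL: No free 2048 kB hugepages reported on node 1" notice that follows is informational: NUMA node 1 has no free 2 MB hugepages, and the run carries on with pages from the other node (per-node free counts live under the standard sysfs path sketched below). Separately, each SPDK process gets its own DPDK --file-prefix (spdk1 for the bdev_svc launched earlier, spdk_pid1075766 for this bdevperf, per the EAL parameters line above), which keeps their hugepage-backed files and shared-memory state from colliding while the processes briefly overlap.

  # not part of this log; a standard way to inspect per-node 2 MB hugepage availability
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages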
00:18:26.061 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:26.061 03:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme1", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme2", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme3", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme4", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme5", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme6", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme7", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme8", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:26.061 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme9", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 },{ 00:18:26.061 "params": { 00:18:26.061 "name": "Nvme10", 00:18:26.061 "trtype": "tcp", 00:18:26.061 "traddr": "10.0.0.2", 00:18:26.061 "adrfam": "ipv4", 00:18:26.061 "trsvcid": "4420", 00:18:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:26.061 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:26.061 "hdgst": false, 00:18:26.061 "ddgst": false 00:18:26.061 }, 00:18:26.061 "method": "bdev_nvme_attach_controller" 00:18:26.061 }' 00:18:26.061 [2024-05-15 03:12:57.092621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.061 [2024-05-15 03:12:57.165965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.435 Running I/O for 1 seconds... 00:18:28.813 00:18:28.813 Latency(us) 00:18:28.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme1n1 : 1.05 244.40 15.28 0.00 0.00 259509.87 15614.66 218833.25 00:18:28.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme2n1 : 1.15 279.32 17.46 0.00 0.00 223994.21 18008.15 216097.84 00:18:28.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme3n1 : 1.13 282.78 17.67 0.00 0.00 217893.67 15728.64 217009.64 00:18:28.813 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme4n1 : 1.14 281.52 17.60 0.00 0.00 215869.53 12537.32 221568.67 00:18:28.813 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme5n1 : 1.13 312.69 19.54 0.00 0.00 185044.17 9573.95 210627.01 00:18:28.813 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme6n1 : 1.15 277.28 17.33 0.00 0.00 212958.52 17780.20 217921.45 00:18:28.813 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme7n1 : 1.14 280.08 17.51 0.00 0.00 207528.11 18236.10 213362.42 00:18:28.813 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme8n1 : 1.15 278.61 17.41 0.00 0.00 205452.87 17552.25 202420.76 00:18:28.813 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme9n1 : 1.16 276.56 17.29 0.00 0.00 204048.29 11853.47 249834.63 00:18:28.813 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:28.813 
Verification LBA range: start 0x0 length 0x400 00:18:28.813 Nvme10n1 : 1.16 277.41 17.34 0.00 0.00 200228.89 1267.98 223392.28 00:18:28.813 =================================================================================================================== 00:18:28.813 Total : 2790.66 174.42 0.00 0.00 212025.94 1267.98 249834.63 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:18:28.813 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.814 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:18:28.814 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.814 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.814 rmmod nvme_tcp 00:18:28.814 rmmod nvme_fabrics 00:18:28.814 rmmod nvme_keyring 00:18:28.814 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1075024 ']' 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1075024 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1075024 ']' 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1075024 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:29.076 03:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1075024 00:18:29.076 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:29.077 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:29.077 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1075024' 00:18:29.077 killing process with pid 1075024 00:18:29.077 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1075024 00:18:29.077 [2024-05-15 03:13:00.027578] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:29.077 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1075024 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.335 03:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.866 00:18:31.866 real 0m14.776s 00:18:31.866 user 0m33.605s 00:18:31.866 sys 0m5.337s 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 ************************************ 00:18:31.866 END TEST nvmf_shutdown_tc1 00:18:31.866 ************************************ 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 ************************************ 00:18:31.866 START TEST nvmf_shutdown_tc2 00:18:31.866 ************************************ 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.866 03:13:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:31.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:31.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.866 03:13:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:31.866 Found net devices under 0000:86:00.0: cvl_0_0 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.866 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:31.867 Found net devices under 0000:86:00.1: cvl_0_1 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:18:31.867 00:18:31.867 --- 10.0.0.2 ping statistics --- 00:18:31.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.867 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:18:31.867 00:18:31.867 --- 10.0.0.1 ping statistics --- 00:18:31.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.867 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1076807 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1076807 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1076807 ']' 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.867 03:13:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:31.867 [2024-05-15 03:13:02.945024] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:31.867 [2024-05-15 03:13:02.945072] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.867 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.867 [2024-05-15 03:13:03.003223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.125 [2024-05-15 03:13:03.086085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.125 [2024-05-15 03:13:03.086120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.125 [2024-05-15 03:13:03.086127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.125 [2024-05-15 03:13:03.086133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.125 [2024-05-15 03:13:03.086138] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
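Before this target came up, nvmftestinit carved the two E810 ports found above (0000:86:00.0/.1, net devices cvl_0_0 and cvl_0_1) into a single-host loopback topology: the target port is moved into a network namespace and nvmf_tgt is launched inside it (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E line above; mask 0x1E is cores 1-4, matching the four reactor notices below). Condensed from the commands in the log (same commands, comments added):

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the peer port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two ping transcripts above (0.200 ms and 0.171 ms round trips) are these reachability checks passing before nvmf_tgt starts listening on 10.0.0.2:4420.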
00:18:32.125 [2024-05-15 03:13:03.086241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.125 [2024-05-15 03:13:03.086327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.125 [2024-05-15 03:13:03.086434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.125 [2024-05-15 03:13:03.086435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.691 [2024-05-15 03:13:03.798476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:32.691 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:32.949 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.949 03:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:32.949 Malloc1 00:18:32.949 [2024-05-15 03:13:03.889990] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:32.949 [2024-05-15 03:13:03.890227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.949 Malloc2 00:18:32.949 Malloc3 00:18:32.949 Malloc4 00:18:32.949 Malloc5 00:18:32.949 Malloc6 00:18:33.208 Malloc7 00:18:33.208 Malloc8 00:18:33.208 Malloc9 00:18:33.208 Malloc10 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1077094 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1077094 /var/tmp/bdevperf.sock 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1077094 ']' 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 
00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.208 [2024-05-15 03:13:04.362431] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:18:33.208 [2024-05-15 03:13:04.362484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077094 ] 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.208 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.208 { 00:18:33.208 "params": { 00:18:33.208 "name": "Nvme$subsystem", 00:18:33.208 "trtype": "$TEST_TRANSPORT", 00:18:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.208 "adrfam": "ipv4", 00:18:33.208 "trsvcid": "$NVMF_PORT", 00:18:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.208 "hdgst": ${hdgst:-false}, 00:18:33.208 "ddgst": ${ddgst:-false} 00:18:33.208 }, 00:18:33.208 "method": "bdev_nvme_attach_controller" 00:18:33.208 } 00:18:33.208 EOF 00:18:33.208 )") 00:18:33.466 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.466 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.466 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.466 { 00:18:33.466 "params": { 00:18:33.466 "name": "Nvme$subsystem", 00:18:33.466 "trtype": "$TEST_TRANSPORT", 00:18:33.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.466 "adrfam": "ipv4", 00:18:33.466 "trsvcid": "$NVMF_PORT", 00:18:33.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.466 "hdgst": ${hdgst:-false}, 00:18:33.466 "ddgst": ${ddgst:-false} 00:18:33.466 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 } 00:18:33.467 EOF 00:18:33.467 )") 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.467 { 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme$subsystem", 00:18:33.467 "trtype": "$TEST_TRANSPORT", 00:18:33.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "$NVMF_PORT", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.467 "hdgst": ${hdgst:-false}, 00:18:33.467 "ddgst": ${ddgst:-false} 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 } 00:18:33.467 EOF 00:18:33.467 )") 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:33.467 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
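The per-subsystem fragments accumulated above are joined just below by the IFS=, / printf pair into the comma-separated controller list that bdevperf consumes. A minimal self-contained sketch of that generator pattern, keeping the names from the trace; the outer JSON wrapper validated by the jq . step is never echoed in this log, so it is left out:

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json loop replayed at nvmf/common.sh@532-558.
# hdgst/ddgst default to false exactly as in the trace.
gen_attach_entries() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                     # join array elements with commas
    printf '%s\n' "${config[*]}"    # one string: {...},{...},...
}

gen_attach_entries 1 2 3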
00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:18:33.467 03:13:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme1", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme2", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme3", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme4", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme5", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme6", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme7", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme8", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:33.467 "hdgst": false, 
00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme9", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 },{ 00:18:33.467 "params": { 00:18:33.467 "name": "Nvme10", 00:18:33.467 "trtype": "tcp", 00:18:33.467 "traddr": "10.0.0.2", 00:18:33.467 "adrfam": "ipv4", 00:18:33.467 "trsvcid": "4420", 00:18:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:33.467 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:33.467 "hdgst": false, 00:18:33.467 "ddgst": false 00:18:33.467 }, 00:18:33.467 "method": "bdev_nvme_attach_controller" 00:18:33.467 }' 00:18:33.467 [2024-05-15 03:13:04.418254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.467 [2024-05-15 03:13:04.491256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.376 Running I/O for 10 seconds... 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:35.634 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.634 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:18:35.634 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:18:35.634 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1077094 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1077094 ']' 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1077094 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077094 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- 
# '[' reactor_0 = sudo ']' 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077094' 00:18:35.892 killing process with pid 1077094 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1077094 00:18:35.892 03:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1077094 00:18:35.892 Received shutdown signal, test time was about 0.904886 seconds 00:18:35.892 00:18:35.892 Latency(us) 00:18:35.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.892 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme1n1 : 0.90 291.01 18.19 0.00 0.00 216678.80 4131.62 216097.84 00:18:35.892 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme2n1 : 0.89 286.75 17.92 0.00 0.00 216570.21 18008.15 217921.45 00:18:35.892 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme3n1 : 0.87 293.70 18.36 0.00 0.00 207489.00 13734.07 217921.45 00:18:35.892 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme4n1 : 0.90 285.70 17.86 0.00 0.00 209699.39 17210.32 217009.64 00:18:35.892 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme5n1 : 0.90 284.08 17.75 0.00 0.00 207039.00 18122.13 217009.64 00:18:35.892 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme6n1 : 0.86 222.76 13.92 0.00 0.00 257931.65 16070.57 232510.33 00:18:35.892 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme7n1 : 0.89 288.66 18.04 0.00 0.00 195593.57 17324.30 217921.45 00:18:35.892 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme8n1 : 0.88 297.60 18.60 0.00 0.00 184872.75 3789.69 217009.64 00:18:35.892 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme9n1 : 0.90 283.12 17.69 0.00 0.00 191392.06 16412.49 221568.67 00:18:35.892 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:35.892 Verification LBA range: start 0x0 length 0x400 00:18:35.892 Nvme10n1 : 0.87 224.62 14.04 0.00 0.00 234409.83 5214.39 237069.36 00:18:35.892 =================================================================================================================== 00:18:35.892 Total : 2757.98 172.37 0.00 0.00 210367.61 3789.69 237069.36 00:18:36.149 03:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1076807 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.081 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.081 rmmod nvme_tcp 00:18:37.081 rmmod nvme_fabrics 00:18:37.340 rmmod nvme_keyring 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1076807 ']' 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1076807 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1076807 ']' 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1076807 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1076807 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1076807' 00:18:37.340 killing process with pid 1076807 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1076807 00:18:37.340 [2024-05-15 03:13:08.315686] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:37.340 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1076807 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.599 03:13:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:40.133 00:18:40.133 real 0m8.214s 00:18:40.133 user 0m25.209s 00:18:40.133 sys 0m1.336s 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:40.133 ************************************ 00:18:40.133 END TEST nvmf_shutdown_tc2 00:18:40.133 ************************************ 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:40.133 ************************************ 00:18:40.133 START TEST nvmf_shutdown_tc3 00:18:40.133 ************************************ 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.133 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:40.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:40.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:40.134 Found net devices under 0000:86:00.0: cvl_0_0 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:40.134 Found net devices under 0000:86:00.1: cvl_0_1 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.134 03:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:18:40.134 00:18:40.134 --- 10.0.0.2 ping statistics --- 00:18:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.134 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:40.134 00:18:40.134 --- 10.0.0.1 ping statistics --- 00:18:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.134 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1078365 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1078365 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:40.134 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1078365 ']' 00:18:40.135 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.135 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- 
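For reference, the nvmf_tcp_init sequence that just produced those two ping results reduces to the commands below, all of which appear verbatim in the trace (nvmf/common.sh@244-268); only the cvl_0_* interface names are specific to this host's e810 ports:

#!/usr/bin/env bash
set -e
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"              # target port into its own ns
ip addr add 10.0.0.1/24 dev "$initiator_if"       # initiator stays in root ns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec "$ns" ping -c 1 10.0.0.1            # target ns -> initiator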
# local max_retries=100 00:18:40.135 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.135 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:40.135 03:13:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.135 [2024-05-15 03:13:11.232316] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:40.135 [2024-05-15 03:13:11.232355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.135 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.135 [2024-05-15 03:13:11.287713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.393 [2024-05-15 03:13:11.366022] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.393 [2024-05-15 03:13:11.366061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.393 [2024-05-15 03:13:11.366068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.393 [2024-05-15 03:13:11.366077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.393 [2024-05-15 03:13:11.366082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.393 [2024-05-15 03:13:11.366125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.393 [2024-05-15 03:13:11.366208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.393 [2024-05-15 03:13:11.366317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.393 [2024-05-15 03:13:11.366318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.959 [2024-05-15 03:13:12.067274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.959 03:13:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:40.959 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:18:41.252 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:41.252 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.252 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.252 Malloc1 00:18:41.252 [2024-05-15 03:13:12.163067] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:41.252 [2024-05-15 03:13:12.163309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
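The @27/@28 loop above only appends one stanza per subsystem to rpcs.txt; the single rpc_cmd at @35 then replays the whole batch, which is what emits Malloc1 and the listen notice here (and Malloc2..Malloc10 below). The stanza contents are never echoed in this log, so the method names and Malloc sizing in this sketch are assumptions based on common SPDK rpc.py usage rather than anything read from the trace:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpcs=$rootdir/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"

num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
    cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# Replay one RPC per line against the running nvmf_tgt.
while read -r rpc_line; do
    # word splitting of $rpc_line is intentional here
    "$rootdir/scripts/rpc.py" $rpc_line
done <"$rpcs"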
00:18:41.252 Malloc2 00:18:41.252 Malloc3 00:18:41.252 Malloc4 00:18:41.252 Malloc5 00:18:41.252 Malloc6 00:18:41.520 Malloc7 00:18:41.520 Malloc8 00:18:41.520 Malloc9 00:18:41.520 Malloc10 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1078642 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1078642 /var/tmp/bdevperf.sock 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1078642 ']' 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
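Context for this wait: bdevperf was just launched in the background with its controller config fed through a process substitution (the /dev/fd/63 in the @124 command line), and no RPC can be issued until /var/tmp/bdevperf.sock is up. A sketch of that launch-and-wait; the socket poll is an assumption standing in for the waitforlisten helper, whose body the trace does not show:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock

# gen_nvmf_target_json is the generator replayed earlier in this log.
"$rootdir/build/examples/bdevperf" -r "$sock" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Assumed stand-in for waitforlisten: poll until the UNIX socket exists
# (bailing out if bdevperf dies first), then block in framework_wait_init
# just as the trace does at target/shutdown.sh@105.
while [[ ! -S $sock ]]; do
    kill -0 "$perfpid" || exit 1
    sleep 0.1
done
"$rootdir/scripts/rpc.py" -s "$sock" framework_wait_init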
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:41.520 {
00:18:41.520 "params": {
00:18:41.520 "name": "Nvme$subsystem",
00:18:41.520 "trtype": "$TEST_TRANSPORT",
00:18:41.520 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:41.520 "adrfam": "ipv4",
00:18:41.520 "trsvcid": "$NVMF_PORT",
00:18:41.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:41.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:41.520 "hdgst": ${hdgst:-false},
00:18:41.520 "ddgst": ${ddgst:-false}
00:18:41.520 },
00:18:41.520 "method": "bdev_nvme_attach_controller"
00:18:41.520 }
00:18:41.520 EOF
00:18:41.520 )")
00:18:41.520 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
(the for/config+=/cat trace block above repeats verbatim once per subsystem; the nine further identical iterations for subsystems 2 through 10 are omitted here)
00:18:41.520 [2024-05-15 03:13:12.641670] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:18:41.521 [2024-05-15 03:13:12.641718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078642 ]
00:18:41.521 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
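The per-subsystem heredoc fragments collected above are assembled into the --json payload roughly as sketched below. Only the IFS="," join, the printf, and the final jq . pretty-print appear in the xtrace (nvmf/common.sh@556-@558), so the enclosing "subsystems"/"bdev" wrapper object is an assumption; treat this as a hypothetical condensation of gen_nvmf_target_json, not the verbatim helper.

    # Hypothetical condensation of nvmf/common.sh:gen_nvmf_target_json.
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per requested subsystem number.
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=","
        # Join the fragments with commas and pretty-print; the wrapper object
        # below is an assumption -- only the join and `jq .` show in the trace.
        jq . <<JSON
    {"subsystems": [{"subsystem": "bdev", "config": [$(printf '%s\n' "${config[*]}")]}]}
    JSON
    }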
00:18:41.521 EAL: No free 2048 kB hugepages reported on node 1
00:18:41.521 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:18:41.521 03:13:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:41.521 "params": {
00:18:41.521 "name": "Nvme1",
00:18:41.521 "trtype": "tcp",
00:18:41.521 "traddr": "10.0.0.2",
00:18:41.521 "adrfam": "ipv4",
00:18:41.521 "trsvcid": "4420",
00:18:41.521 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:41.521 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:41.521 "hdgst": false,
00:18:41.521 "ddgst": false
00:18:41.521 },
00:18:41.521 "method": "bdev_nvme_attach_controller"
00:18:41.521 },{
(the entries for Nvme2 through Nvme9 follow the same shape, differing only in the numeric suffix of name, subnqn, and hostnqn; they are omitted here)
00:18:41.521 "params": {
00:18:41.521 "name": "Nvme10",
00:18:41.521 "trtype": "tcp",
00:18:41.521 "traddr": "10.0.0.2",
00:18:41.521 "adrfam": "ipv4",
00:18:41.521 "trsvcid": "4420",
00:18:41.521 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:18:41.521 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:18:41.521 "hdgst": false,
00:18:41.521 "ddgst": false
00:18:41.521 },
00:18:41.521 "method": "bdev_nvme_attach_controller"
00:18:41.521 }'
00:18:41.780 [2024-05-15 03:13:12.698291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:41.780 [2024-05-15 03:13:12.771549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:43.677 Running I/O for 10 seconds...
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:18:43.677 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:18:43.935 03:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=199
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 199 -ge 100 ']'
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1078365
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1078365 ']'
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1078365
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1078365
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1078365'
00:18:44.210 killing process with pid 1078365
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1078365
00:18:44.210 [2024-05-15 03:13:15.234393] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:18:44.210 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1078365
00:18:44.210 [2024-05-15 03:13:15.234946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17135a0 is same with the state(5) to be set
(last message repeated dozens of times through 03:13:15.235360; duplicate lines omitted)
00:18:44.211 [2024-05-15 03:13:15.238270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713ee0 is same with the state(5) to be set
(last message repeated dozens of times through 03:13:15.238675; duplicate lines omitted)
00:18:44.212 [2024-05-15 03:13:15.239826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714380 is same with the state(5) to be set
(last message repeated dozens of times through 03:13:15.240223; duplicate lines omitted)
00:18:44.212 [2024-05-15 03:13:15.240861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714820 is same with the state(5) to be set
(last message repeated dozens of times through 03:13:15.241259; duplicate lines omitted)
00:18:44.213 [2024-05-15 03:13:15.242055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set
(last message repeated; the run of identical messages for this tqpair continues)
with the state(5) to be set 00:18:44.213 [2024-05-15 03:13:15.242211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.213 [2024-05-15 03:13:15.242221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.213 [2024-05-15 03:13:15.242227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.213 [2024-05-15 03:13:15.242233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.213 [2024-05-15 03:13:15.242239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242345] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.242454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972360 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the 
state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.214 [2024-05-15 03:13:15.243575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 
03:13:15.243647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.243738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972800 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same 
with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244666] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the 
state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.215 [2024-05-15 03:13:15.244824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.244876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972ca0 is same with the state(5) to be set 00:18:44.216 [2024-05-15 03:13:15.253308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.216 [2024-05-15 03:13:15.253343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.216 [2024-05-15 03:13:15.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.216 [2024-05-15 03:13:15.253375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.216 [2024-05-15 03:13:15.253387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.216 [2024-05-15 03:13:15.253396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.216 
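The repeated tcp.c:1598 message above comes from SPDK's NVMe-oF TCP transport: the target asks to set a PDU receive state the qpair is already in, and the setter logs the redundant transition instead of applying it; the four tqpair pointers appear to be four TCP qpairs hitting that path in close succession. A minimal self-contained sketch of that kind of guard, using stand-in types in place of the real SPDK definitions (which state "state(5)" maps to depends on the SPDK revision under test):

    #include <stdio.h>

    /* Stand-in for SPDK's error logger. */
    #define SPDK_ERRLOG(fmt, ...) fprintf(stderr, "*ERROR*: " fmt, __VA_ARGS__)

    enum tcp_pdu_recv_state {
        TCP_RECV_STATE_AWAIT_PDU_READY = 0,
        TCP_RECV_STATE_EXAMPLE = 5,   /* placeholder for the log's "state(5)" */
    };

    struct tcp_qpair {
        enum tcp_pdu_recv_state recv_state;
    };

    static void
    tcp_qpair_set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Redundant transition: log it (the flood above) and bail out. */
            SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                        (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = TCP_RECV_STATE_EXAMPLE };

        /* Requesting the state the qpair is already in reproduces the message once. */
        tcp_qpair_set_recv_state(&q, TCP_RECV_STATE_EXAMPLE);
        return 0;
    }

Each repeated line in the log is one such rejected transition; the message is noisy but benign on its own. The dump that follows shows what actually failed the test step.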
[... the WRITE command/ABORTED - SQ DELETION completion pair repeats for cid:6 through cid:63 (lba 25344 up to 32640, stepping by 128), timestamps through 03:13:15.254735 ...]
00:18:44.217 [2024-05-15 03:13:15.254748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.217 [2024-05-15 03:13:15.254760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ/ABORTED pair repeats for cid:1 (lba 24704, 03:13:15.254772) and cid:2 (lba 24832, 03:13:15.254795) ...]
00:18:44.217 [2024-05-15 03:13:15.254846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:18:44.217 [2024-05-15 03:13:15.254910] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x278c140 was disconnected and freed. reset controller.
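This is the teardown sequence: the CQ transport error (-6, i.e. ENXIO, No such device or address) fails the qpair, bdev_nvme frees it and initiates a controller reset, and every command still outstanding on qid 1 is completed back with ABORTED - SQ DELETION. The "(00/08)" in each completion is status code type 0x0 (generic) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion; SPDK exposes these values as SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION in spdk/nvme_spec.h. A small self-contained sketch of recognizing that status, with a stand-in struct in place of spdk_nvme_cpl:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the status field of struct spdk_nvme_cpl. */
    struct nvme_status {
        unsigned sct;   /* status code type */
        unsigned sc;    /* status code */
    };

    /* True when a completion carries the "(00/08)" status printed above:
     * generic status code type, Command Aborted due to SQ Deletion. */
    static bool aborted_by_sq_deletion(const struct nvme_status *st)
    {
        return st->sct == 0x0 && st->sc == 0x08;
    }

    int main(void)
    {
        struct nvme_status st = { .sct = 0x0, .sc = 0x08 };

        if (aborted_by_sq_deletion(&st)) {
            /* Expected while a qpair is being deleted: in-flight I/O is not
             * executed, it is completed to the caller with this status. */
            printf("ABORTED - SQ DELETION (00/08)\n");
        }
        return 0;
    }

The resumed dump below is the same mechanism finishing off the commands that were still queued when the reset began; every completion line shows cid:0, consistent with a locally synthesized abort completion rather than an entry read off the wire.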
00:18:44.217 [2024-05-15 03:13:15.254993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.217 [2024-05-15 03:13:15.255007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/ABORTED pair repeats for WRITE cid:53 through cid:63 (lba 31360 up to 32640) and then READ cid:0 through cid:25 (lba 24576 up to 27776), timestamps through 03:13:15.255876 ...]
00:18:44.218 [2024-05-15 03:13:15.255887] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.255898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.255910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.255921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.255933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.255944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.255956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.255966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.255978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.255989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.218 [2024-05-15 03:13:15.256203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.218 [2024-05-15 03:13:15.256215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-05-15 03:13:15.256488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.219 [2024-05-15 03:13:15.256917] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x278d5e0 was disconnected and freed. reset controller. 
00:18:44.219 [2024-05-15 03:13:15.257007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5980 is same with the state(5) to be set
00:18:44.219 [2024-05-15 03:13:15.257144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ee730 is same with the state(5) to be set
00:18:44.219 [2024-05-15 03:13:15.257271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263aa00 is same with the state(5) to be set
00:18:44.219 [2024-05-15 03:13:15.257394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261ba30 is same with the state(5) to be set
00:18:44.219 [2024-05-15 03:13:15.257530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.219 [2024-05-15 03:13:15.257610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.219 [2024-05-15 03:13:15.257620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f08a0 is same with the state(5) to be set
00:18:44.219 [2024-05-15 03:13:15.257650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dd2f0 is same with the state(5) to be set
00:18:44.220 [2024-05-15 03:13:15.257771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27aee30 is same with the state(5) to be set
00:18:44.220 [2024-05-15 03:13:15.257884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.257967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.257976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b74c0 is same with the state(5) to be set
00:18:44.220 [2024-05-15 03:13:15.258008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263b4f0 is same with the state(5) to be set
00:18:44.220 [2024-05-15 03:13:15.258127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.220 [2024-05-15 03:13:15.258203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5610 is same with the state(5) to be set
00:18:44.220 [2024-05-15 03:13:15.258323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.220 [2024-05-15 03:13:15.258752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.220 [2024-05-15 03:13:15.258764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.258985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.258997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.221 [2024-05-15 03:13:15.259707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.221 [2024-05-15 03:13:15.259719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.259729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.222 [2024-05-15 03:13:15.259742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.259752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.222 [2024-05-15 03:13:15.259764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.259774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.222 [2024-05-15 03:13:15.259786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.259797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.222 [2024-05-15 03:13:15.259811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.259821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.222 [2024-05-15 03:13:15.259896] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2795a80 was disconnected and freed. reset controller.
00:18:44.222 [2024-05-15 03:13:15.263673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:18:44.222 [2024-05-15 03:13:15.263712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:18:44.222 [2024-05-15 03:13:15.263728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:18:44.222 [2024-05-15 03:13:15.263746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ee730 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.263763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5980 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.263776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27aee30 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.264753] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265084] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265142] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265196] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265247] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265298] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265349] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:18:44.222 [2024-05-15 03:13:15.265526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.265711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.265725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27aee30 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.265737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27aee30 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.265858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.265975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.265990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5980 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.266000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5980 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.266112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.266286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.266299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ee730 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.266310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ee730 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.266433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27aee30 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.266450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5980 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.266463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ee730 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.266539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.266551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.266562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:18:44.222 [2024-05-15 03:13:15.266580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.266589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.266598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:18:44.222 [2024-05-15 03:13:15.266612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.266622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.266631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:18:44.222 [2024-05-15 03:13:15.266675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.222 [2024-05-15 03:13:15.266686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.222 [2024-05-15 03:13:15.266694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
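[annotation: errno 111 is ECONNREFUSED on Linux, so the posix_sock_create failures above mean there was no listener on 10.0.0.2:4420 while the target side was down mid-reset. A minimal standalone sketch (not SPDK code) that reproduces that errno with plain POSIX sockets; the address and port are taken from the log, everything else is illustrative:]

    /* Sketch: connect() to a TCP port with no listener -> errno 111. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target addr from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With nothing listening on the port, Linux reports errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }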
00:18:44.222 [2024-05-15 03:13:15.266981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263aa00 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261ba30 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f08a0 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dd2f0 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b74c0 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263b4f0 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.267106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5610 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.274787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:18:44.222 [2024-05-15 03:13:15.274859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:18:44.222 [2024-05-15 03:13:15.274874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:18:44.222 [2024-05-15 03:13:15.275175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.275358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.275371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ee730 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.275382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ee730 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.275658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.275820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.275834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5980 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.275845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5980 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.276020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.276201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.222 [2024-05-15 03:13:15.276214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27aee30 with addr=10.0.0.2, port=4420
00:18:44.222 [2024-05-15 03:13:15.276223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27aee30 is same with the state(5) to be set
00:18:44.222 [2024-05-15 03:13:15.276237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ee730 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.276286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5980 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.276300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27aee30 (9): Bad file descriptor
00:18:44.222 [2024-05-15 03:13:15.276311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.276322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.276332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:18:44.222 [2024-05-15 03:13:15.276377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.222 [2024-05-15 03:13:15.276389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.276398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.276407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:18:44.222 [2024-05-15 03:13:15.276421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:18:44.222 [2024-05-15 03:13:15.276430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:18:44.222 [2024-05-15 03:13:15.276439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:18:44.222 [2024-05-15 03:13:15.276486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.222 [2024-05-15 03:13:15.276497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
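[annotation: the recurring sequence above - nvme_ctrlr_disconnect, refused connect, nvme_ctrlr_process_init error, spdk_nvme_ctrlr_reconnect_poll_async failure, _bdev_nvme_reset_ctrlr_complete "Resetting controller failed." - reads as a reset/reconnect loop that gives up when the transport cannot be re-established. A hypothetical state-machine sketch of such a loop; the names ctrlr_state, transport_connect and try_reconnect are illustrative and are not SPDK API:]

    /* Sketch of a reconnect-with-retry loop, assumed shape only. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ctrlr_state { CTRLR_DISCONNECTED, CTRLR_CONNECTING, CTRLR_FAILED };

    /* Stand-in transport connect that keeps failing, as in the log (errno 111). */
    static bool transport_connect(void) { return false; }

    static enum ctrlr_state try_reconnect(int *attempts, int max_attempts)
    {
        while (*attempts < max_attempts) {
            (*attempts)++;
            if (transport_connect()) {
                return CTRLR_CONNECTING;   /* would proceed to controller init */
            }
            printf("attempt %d: connect refused, controller reinitialization failed\n",
                   *attempts);
        }
        return CTRLR_FAILED;               /* mirrors "Ctrlr is in error state" */
    }

    int main(void)
    {
        int attempts = 0;
        if (try_reconnect(&attempts, 3) == CTRLR_FAILED) {
            printf("Resetting controller failed.\n");
        }
        return 0;
    }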
00:18:44.222 [2024-05-15 03:13:15.277118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.222 [2024-05-15 03:13:15.277134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed dump, 03:13:15.277153-03:13:15.278619 (Jenkins stamps 00:18:44.222-224): the same *NOTICE* command/completion pattern repeats for the rest of the queue - READ sqid:1 cid:5-6 (lba:25216-25344), WRITE sqid:1 cid:0-3 (lba:32768-33152), READ sqid:1 cid:7-63 (lba:25472-32640); len:128 on every command, lba advancing by 128 per cid, each command followed by the identical "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" completion]
00:18:44.224 [2024-05-15 03:13:15.278630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27947a0 is same with the state(5) to be set
00:18:44.224 [2024-05-15 03:13:15.279770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.224 [2024-05-15 03:13:15.279792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed dump, 03:13:15.279810-03:13:15.281261 (Jenkins stamps 00:18:44.224-226): READ sqid:1 cid:1-63 (lba:24704-32640), len:128, lba advancing by 128 per cid, each command followed by the identical ABORTED - SQ DELETION completion]
00:18:44.226 [2024-05-15 03:13:15.281272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272f490 is same with the state(5) to be set
00:18:44.226 [2024-05-15 03:13:15.282410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.226 [2024-05-15 03:13:15.282431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed dump, 03:13:15.282447-03:13:15.283145 (Jenkins stamp 00:18:44.226): a third identical dump, READ sqid:1 cid:1-31 (lba:24704-28544) with matching ABORTED - SQ DELETION completions through cid:30; the excerpt breaks off in the record that follows]
00:18:44.226 [2024-05-15 03:13:15.283155] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.283896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.283910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2730a20 is same with the state(5) to be set 00:18:44.227 [2024-05-15 03:13:15.285029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.227 [2024-05-15 03:13:15.285206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.227 [2024-05-15 03:13:15.285216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.285983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.285996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.228 [2024-05-15 03:13:15.286020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.228 [2024-05-15 03:13:15.286030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:44.229 [2024-05-15 03:13:15.286142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 
03:13:15.286370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.286517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.286528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e7cf0 is same with the state(5) to be set 00:18:44.229 [2024-05-15 03:13:15.287644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.287988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.287998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.288011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.288022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.288034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.288045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.288056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.288067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.288080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.229 [2024-05-15 03:13:15.288090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.229 [2024-05-15 03:13:15.288103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.288982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.288992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.289006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.289017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.289029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.289040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.289053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.230 [2024-05-15 03:13:15.289064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.230 [2024-05-15 03:13:15.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.289088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.289101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.289111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.289124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.289134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.289147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.289157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.289168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e91f0 is same with the state(5) to be set 00:18:44.231 [2024-05-15 03:13:15.290290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290514] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.290984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.290995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.291008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.291018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.291030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.291040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.291053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.291063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.291075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.291088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.231 [2024-05-15 03:13:15.291100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.231 [2024-05-15 03:13:15.291111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:44.232 [2024-05-15 03:13:15.291442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 
03:13:15.291676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.291777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.291791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ea6f0 is same with the state(5) to be set 00:18:44.232 [2024-05-15 03:13:15.292917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.292938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.292956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.292967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.292980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.292990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.293004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.293014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.293026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.293036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.232 [2024-05-15 03:13:15.293049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.232 [2024-05-15 03:13:15.293060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.293977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.293990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.294001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.233 [2024-05-15 03:13:15.294014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.233 [2024-05-15 03:13:15.294025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.234 [2024-05-15 03:13:15.294418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.234 [2024-05-15 03:13:15.294429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ebbf0 is same with the state(5) to be set 00:18:44.234 [2024-05-15 03:13:15.295812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:44.234 [2024-05-15 03:13:15.295838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:18:44.234 [2024-05-15 03:13:15.295853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:18:44.234 [2024-05-15 03:13:15.295867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:18:44.234 [2024-05-15 03:13:15.295960] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.234 [2024-05-15 03:13:15.295977] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.234 [2024-05-15 03:13:15.295992] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.234 [2024-05-15 03:13:15.296389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:18:44.234 [2024-05-15 03:13:15.296408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:18:44.234 task offset: 24960 on job bdev=Nvme9n1 fails
00:18:44.234
00:18:44.234 Latency(us)
00:18:44.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:44.234 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme1n1 ended in about 0.90 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme1n1 : 0.90 216.84 13.55 70.80 0.00 220247.21 12366.36 210627.01
00:18:44.234 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme2n1 ended in about 0.89 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme2n1 : 0.89 216.32 13.52 72.11 0.00 215698.48 9744.92 240716.58
00:18:44.234 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme3n1 ended in about 0.91 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme3n1 : 0.91 211.79 13.24 70.60 0.00 216508.55 24846.69 211538.81
00:18:44.234 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme4n1 ended in about 0.91 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme4n1 : 0.91 211.18 13.20 70.39 0.00 213162.30 13449.13 218833.25
00:18:44.234 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme5n1 ended in about 0.91 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme5n1 : 0.91 210.58 13.16 70.19 0.00 209889.50 19375.86 214274.23
00:18:44.234 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme6n1 ended in about 0.91 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme6n1 : 0.91 139.98 8.75 69.99 0.00 275569.38 19261.89 244363.80
00:18:44.234 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme7n1 ended in about 0.92 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme7n1 : 0.92 209.37 13.09 69.79 0.00 203323.21 16298.52 216097.84
00:18:44.234 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme8n1 ended in about 0.92 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme8n1 : 0.92 208.77 13.05 69.59 0.00 200043.41 14531.90 216097.84
00:18:44.234 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme9n1 ended in about 0.89 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme9n1 : 0.89 216.82 13.55 72.27 0.00 187549.83 7237.45 225215.89
00:18:44.234 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:44.234 Job: Nvme10n1 ended in about 0.89 seconds with error
00:18:44.234 Verification LBA range: start 0x0 length 0x400
00:18:44.234 Nvme10n1 : 0.89 216.58 13.54 72.19 0.00 183866.32 15386.71 218833.25
00:18:44.234 ===================================================================================================================
00:18:44.234 Total : 2058.23 128.64 707.93 0.00 210985.70 7237.45 244363.80
00:18:44.234 [2024-05-15 03:13:15.321692] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:44.234 [2024-05-15 03:13:15.321742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:18:44.234 [2024-05-15 03:13:15.322141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.322375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.322390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f08a0 with addr=10.0.0.2, port=4420
00:18:44.234 [2024-05-15 03:13:15.322404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f08a0 is same with the state(5) to be set
00:18:44.234 [2024-05-15 03:13:15.322586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.322705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.322718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x261ba30 with addr=10.0.0.2, port=4420
00:18:44.234 [2024-05-15 03:13:15.322736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261ba30 is same with the state(5) to be set
00:18:44.234 [2024-05-15 03:13:15.322864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.323042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.323056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21dd2f0 with addr=10.0.0.2, port=4420
00:18:44.234 [2024-05-15 03:13:15.323066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dd2f0 is same with the state(5) to be set
00:18:44.234 [2024-05-15 03:13:15.323183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.234 [2024-05-15 03:13:15.323309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.323321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x263aa00 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.323332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263aa00 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.324940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:18:44.235 [2024-05-15 03:13:15.324965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:18:44.235 [2024-05-15 03:13:15.325193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.325308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.325321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x263b4f0 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.325333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263b4f0 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.325500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.325675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.325688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f5610 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.325700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5610 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.325797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27b74c0 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.326066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b74c0 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.326084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f08a0 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261ba30 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dd2f0 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263aa00 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326165] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.235 [2024-05-15 03:13:15.326185] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.235 [2024-05-15 03:13:15.326198] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.235 [2024-05-15 03:13:15.326214] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.235 [2024-05-15 03:13:15.326232] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:44.235 [2024-05-15 03:13:15.326313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:18:44.235 [2024-05-15 03:13:15.326454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ee730 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.326602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ee730 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.326710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.326827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27aee30 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.326838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27aee30 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.326851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263b4f0 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5610 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b74c0 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.326891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.326901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.326913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:44.235 [2024-05-15 03:13:15.326930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.326939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.326949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:18:44.235 [2024-05-15 03:13:15.326964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.326975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.326984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:18:44.235 [2024-05-15 03:13:15.326997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.327007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.327017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:18:44.235 [2024-05-15 03:13:15.327115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.327350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.235 [2024-05-15 03:13:15.327369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a5980 with addr=10.0.0.2, port=4420
00:18:44.235 [2024-05-15 03:13:15.327380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5980 is same with the state(5) to be set
00:18:44.235 [2024-05-15 03:13:15.327394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ee730 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.327406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27aee30 (9): Bad file descriptor
00:18:44.235 [2024-05-15 03:13:15.327418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.327427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.327437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:18:44.235 [2024-05-15 03:13:15.327450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.327459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.327475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:18:44.235 [2024-05-15 03:13:15.327486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:18:44.235 [2024-05-15 03:13:15.327495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:18:44.235 [2024-05-15 03:13:15.327505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:18:44.235 [2024-05-15 03:13:15.327538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:44.235 [2024-05-15 03:13:15.327568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a5980 (9): Bad file descriptor 00:18:44.235 [2024-05-15 03:13:15.327579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:44.235 [2024-05-15 03:13:15.327589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:18:44.235 [2024-05-15 03:13:15.327598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:44.235 [2024-05-15 03:13:15.327610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:18:44.235 [2024-05-15 03:13:15.327619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:18:44.235 [2024-05-15 03:13:15.327629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:44.235 [2024-05-15 03:13:15.327660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:44.235 [2024-05-15 03:13:15.327671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:44.235 [2024-05-15 03:13:15.327679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:18:44.235 [2024-05-15 03:13:15.327688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:18:44.235 [2024-05-15 03:13:15.327698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:44.235 [2024-05-15 03:13:15.327725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
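The block above is the expected teardown cascade for nvmf_shutdown_tc3: the target application has already been stopped, so each controller's reconnect path fails in connect() with errno = 111 (ECONNREFUSED on Linux), the subsequent flush of the dead socket fails with (9) EBADF, and bdev_nvme finally reports "Resetting controller failed." for every cnode. A minimal sketch of how the dead listener could be confirmed from a shell, reusing the traddr/trsvcid from the trace (the probe is illustrative only and is not part of the test):

  # Probe 10.0.0.2:4420 (values taken from the trace). With nvmf_tgt gone,
  # the connect is refused -- the same ECONNREFUSED logged as "errno = 111".
  # /dev/tcp is a bash builtin redirection, not a real device node.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "no NVMe/TCP listener on 10.0.0.2:4420 (refused or timed out)"
  fi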
00:18:44.802 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:18:44.802 03:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1078642 00:18:45.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1078642) - No such process 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.739 rmmod nvme_tcp 00:18:45.739 rmmod nvme_fabrics 00:18:45.739 rmmod nvme_keyring 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.739 03:13:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.271 03:13:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.271 00:18:48.271 real 0m7.947s 00:18:48.271 user 0m19.922s 00:18:48.271 sys 0m1.256s 00:18:48.271 
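Two defensive patterns from the teardown above are worth noting: shutdown.sh tolerates kill -9 of a pid that has already exited ("No such process") by following the kill with true, so set -e does not abort the cleanup, and nvmfcleanup removes nvme-tcp inside a bounded "for i in {1..20}" retry loop because the module stays busy until the last controller is torn down. A hedged sketch of the same pattern; the pause between retries is an assumption, the trace does not show one:

  # Tolerant teardown: an already-dead pid must not kill the cleanup itself.
  kill -9 "$nvmfpid" 2>/dev/null || true
  # Bounded unload retry, mirroring nvmfcleanup's {1..20} loop above.
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 0.5   # assumed backoff, not shown in the trace
  done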
03:13:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.271 03:13:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 ************************************ 00:18:48.271 END TEST nvmf_shutdown_tc3 00:18:48.271 ************************************ 00:18:48.271 03:13:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:18:48.271 00:18:48.271 real 0m31.280s 00:18:48.271 user 1m18.869s 00:18:48.271 sys 0m8.154s 00:18:48.271 03:13:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.271 03:13:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 ************************************ 00:18:48.271 END TEST nvmf_shutdown 00:18:48.271 ************************************ 00:18:48.271 03:13:18 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 03:13:18 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 03:13:18 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:18:48.271 03:13:18 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.271 03:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 ************************************ 00:18:48.271 START TEST nvmf_multicontroller 00:18:48.271 ************************************ 00:18:48.271 03:13:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:48.271 * Looking for test storage... 
00:18:48.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.271 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:18:48.272 03:13:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.272 03:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.540 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.541 03:13:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:53.541 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:53.541 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:53.541 Found net devices under 0000:86:00.0: cvl_0_0 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:53.541 Found net devices under 0000:86:00.1: cvl_0_1 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.541 03:13:24 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:18:53.541 00:18:53.541 --- 10.0.0.2 ping statistics --- 00:18:53.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.541 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:18:53.541 00:18:53.541 --- 10.0.0.1 ping statistics --- 00:18:53.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.541 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:53.541 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1082701 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1082701 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1082701 ']' 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:53.542 03:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:53.542 [2024-05-15 03:13:24.574769] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:53.542 [2024-05-15 03:13:24.574809] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.542 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.542 [2024-05-15 03:13:24.631676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:53.800 [2024-05-15 03:13:24.713725] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.801 [2024-05-15 03:13:24.713760] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.801 [2024-05-15 03:13:24.713767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.801 [2024-05-15 03:13:24.713773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.801 [2024-05-15 03:13:24.713778] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.801 [2024-05-15 03:13:24.713878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.801 [2024-05-15 03:13:24.713899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.801 [2024-05-15 03:13:24.713900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 [2024-05-15 03:13:25.418476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 Malloc0 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 [2024-05-15 03:13:25.480426] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.367 [2024-05-15 03:13:25.480662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 [2024-05-15 03:13:25.488542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 Malloc1 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 03:13:25 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1082945 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:18:54.625 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1082945 /var/tmp/bdevperf.sock 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1082945 ']' 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
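Before bdevperf attaches its controllers, the rpc_cmd calls traced above configure the target over JSON-RPC; in SPDK's test harness rpc_cmd forwards to scripts/rpc.py. As a sketch, the equivalent standalone invocations for cnode1, using the exact names, NQNs, and ports from the trace and assuming the default /var/tmp/spdk.sock RPC socket (cnode2/Malloc1 is configured the same way):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421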
00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.626 03:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 NVMe0n1 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.560 1 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 request: 00:18:55.560 { 00:18:55.560 "name": "NVMe0", 00:18:55.560 "trtype": "tcp", 00:18:55.560 "traddr": "10.0.0.2", 00:18:55.560 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:18:55.560 "hostaddr": "10.0.0.2", 00:18:55.560 "hostsvcid": "60000", 00:18:55.560 "adrfam": "ipv4", 00:18:55.560 "trsvcid": "4420", 00:18:55.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.560 "method": 
"bdev_nvme_attach_controller", 00:18:55.560 "req_id": 1 00:18:55.560 } 00:18:55.560 Got JSON-RPC error response 00:18:55.560 response: 00:18:55.560 { 00:18:55.560 "code": -114, 00:18:55.560 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:55.560 } 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:55.560 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.561 request: 00:18:55.561 { 00:18:55.561 "name": "NVMe0", 00:18:55.561 "trtype": "tcp", 00:18:55.561 "traddr": "10.0.0.2", 00:18:55.561 "hostaddr": "10.0.0.2", 00:18:55.561 "hostsvcid": "60000", 00:18:55.561 "adrfam": "ipv4", 00:18:55.561 "trsvcid": "4420", 00:18:55.561 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:55.561 "method": "bdev_nvme_attach_controller", 00:18:55.561 "req_id": 1 00:18:55.561 } 00:18:55.561 Got JSON-RPC error response 00:18:55.561 response: 00:18:55.561 { 00:18:55.561 "code": -114, 00:18:55.561 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:55.561 } 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.561 request: 00:18:55.561 { 00:18:55.561 "name": "NVMe0", 00:18:55.561 "trtype": "tcp", 00:18:55.561 "traddr": "10.0.0.2", 00:18:55.561 "hostaddr": "10.0.0.2", 00:18:55.561 "hostsvcid": "60000", 00:18:55.561 "adrfam": "ipv4", 00:18:55.561 "trsvcid": "4420", 00:18:55.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.561 "multipath": "disable", 00:18:55.561 "method": "bdev_nvme_attach_controller", 00:18:55.561 "req_id": 1 00:18:55.561 } 00:18:55.561 Got JSON-RPC error response 00:18:55.561 response: 00:18:55.561 { 00:18:55.561 "code": -114, 00:18:55.561 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:18:55.561 } 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.561 request: 00:18:55.561 { 00:18:55.561 "name": "NVMe0", 00:18:55.561 "trtype": "tcp", 00:18:55.561 "traddr": "10.0.0.2", 00:18:55.561 "hostaddr": "10.0.0.2", 00:18:55.561 "hostsvcid": "60000", 00:18:55.561 "adrfam": "ipv4", 00:18:55.561 "trsvcid": "4420", 00:18:55.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.561 "multipath": "failover", 00:18:55.561 "method": "bdev_nvme_attach_controller", 00:18:55.561 "req_id": 1 00:18:55.561 } 00:18:55.561 Got JSON-RPC error response 00:18:55.561 response: 00:18:55.561 { 00:18:55.561 "code": -114, 00:18:55.561 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:18:55.561 } 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.561 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.819 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.819 03:13:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:56.077 00:18:56.077 03:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.077 03:13:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:18:56.078 03:13:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.011 0 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1082945 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1082945 ']' 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1082945 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1082945 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1082945' 00:18:57.270 killing process with pid 1082945 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1082945 00:18:57.270 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1082945 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:18:57.529 03:13:28 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:18:57.529 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:18:57.529 [2024-05-15 03:13:25.591012] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:18:57.529 [2024-05-15 03:13:25.591054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082945 ] 00:18:57.529 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.529 [2024-05-15 03:13:25.644085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.529 [2024-05-15 03:13:25.717122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.529 [2024-05-15 03:13:27.046859] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 7f3ab28a-08ef-424c-8dd5-123a6c93ea0e already exists 00:18:57.529 [2024-05-15 03:13:27.046891] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:7f3ab28a-08ef-424c-8dd5-123a6c93ea0e alias for bdev NVMe1n1 00:18:57.529 [2024-05-15 03:13:27.046900] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:18:57.529 Running I/O for 1 seconds... 
00:18:57.529 00:18:57.529 Latency(us) 00:18:57.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.529 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:18:57.529 NVMe0n1 : 1.01 23169.42 90.51 0.00 0.00 5506.10 5185.89 10656.72 00:18:57.529 =================================================================================================================== 00:18:57.529 Total : 23169.42 90.51 0.00 0.00 5506.10 5185.89 10656.72 00:18:57.529 Received shutdown signal, test time was about 1.000000 seconds 00:18:57.529 00:18:57.529 Latency(us) 00:18:57.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.529 =================================================================================================================== 00:18:57.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.529 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:18:57.529 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.530 rmmod nvme_tcp 00:18:57.530 rmmod nvme_fabrics 00:18:57.530 rmmod nvme_keyring 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1082701 ']' 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1082701 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1082701 ']' 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1082701 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1082701 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1082701' 00:18:57.530 killing process with pid 1082701 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1082701 00:18:57.530 [2024-05-15 
03:13:28.602233] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:57.530 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1082701 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.788 03:13:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.321 03:13:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.321 00:19:00.321 real 0m11.945s 00:19:00.321 user 0m17.009s 00:19:00.321 sys 0m4.836s 00:19:00.321 03:13:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.321 03:13:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:00.321 ************************************ 00:19:00.321 END TEST nvmf_multicontroller 00:19:00.321 ************************************ 00:19:00.321 03:13:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:00.321 03:13:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.321 03:13:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.321 03:13:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.321 ************************************ 00:19:00.321 START TEST nvmf_aer 00:19:00.321 ************************************ 00:19:00.321 03:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:00.321 * Looking for test storage... 
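The nvmf_multicontroller run that just ended exercises SPDK's multipath guard rails: once a controller named NVMe0 exists, a second bdev_nvme_attach_controller with -x disable is rejected outright (JSON-RPC error -114), -x failover is rejected for an identical network path, and only a genuinely new path (the 4421 portal) attaches. A minimal sketch of that sequence against a bdevperf RPC socket, assuming rpc_cmd here is effectively SPDK's scripts/rpc.py invoked from the repo root, with the addresses and socket from this run:

  RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # NVMe0 was created earlier in the run on the 4420 portal.
  # Re-attaching the same name with multipath disabled fails with -114
  # ("already exists and multipath is disabled").
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true

  # Failover mode still refuses a path identical to the existing one.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || true

  # A different portal (4421) is a new path and attaches cleanly.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Drive I/O through the attached bdevs via bdevperf's RPC hook.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests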
00:19:00.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.321 03:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.659 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:19:05.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.660 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.660 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.660 
03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:19:05.660 00:19:05.660 --- 10.0.0.2 ping statistics --- 00:19:05.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.660 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:05.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:19:05.660 00:19:05.660 --- 10.0.0.1 ping statistics --- 00:19:05.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.660 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1086941 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1086941 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1086941 ']' 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:05.660 03:13:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:05.660 [2024-05-15 03:13:36.586872] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:05.660 [2024-05-15 03:13:36.586919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.660 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.660 [2024-05-15 03:13:36.644843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.660 [2024-05-15 03:13:36.725637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.660 [2024-05-15 03:13:36.725673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:05.660 [2024-05-15 03:13:36.725680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.660 [2024-05-15 03:13:36.725686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.660 [2024-05-15 03:13:36.725691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.660 [2024-05-15 03:13:36.725744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.660 [2024-05-15 03:13:36.725840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.660 [2024-05-15 03:13:36.725860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.660 [2024-05-15 03:13:36.725861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.597 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.597 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:19:06.597 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.597 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.597 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 [2024-05-15 03:13:37.441404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 Malloc0 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 [2024-05-15 03:13:37.492868] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:06.598 [2024-05-15 03:13:37.493087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 [ 00:19:06.598 { 00:19:06.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:06.598 "subtype": "Discovery", 00:19:06.598 "listen_addresses": [], 00:19:06.598 "allow_any_host": true, 00:19:06.598 "hosts": [] 00:19:06.598 }, 00:19:06.598 { 00:19:06.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.598 "subtype": "NVMe", 00:19:06.598 "listen_addresses": [ 00:19:06.598 { 00:19:06.598 "trtype": "TCP", 00:19:06.598 "adrfam": "IPv4", 00:19:06.598 "traddr": "10.0.0.2", 00:19:06.598 "trsvcid": "4420" 00:19:06.598 } 00:19:06.598 ], 00:19:06.598 "allow_any_host": true, 00:19:06.598 "hosts": [], 00:19:06.598 "serial_number": "SPDK00000000000001", 00:19:06.598 "model_number": "SPDK bdev Controller", 00:19:06.598 "max_namespaces": 2, 00:19:06.598 "min_cntlid": 1, 00:19:06.598 "max_cntlid": 65519, 00:19:06.598 "namespaces": [ 00:19:06.598 { 00:19:06.598 "nsid": 1, 00:19:06.598 "bdev_name": "Malloc0", 00:19:06.598 "name": "Malloc0", 00:19:06.598 "nguid": "22E9549F58754A758EF3EC0A6D224A24", 00:19:06.598 "uuid": "22e9549f-5875-4a75-8ef3-ec0a6d224a24" 00:19:06.598 } 00:19:06.598 ] 00:19:06.598 } 00:19:06.598 ] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1087116 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:06.598 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.598 Malloc1 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.598 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.857 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.857 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:06.857 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.857 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.857 Asynchronous Event Request test 00:19:06.857 Attaching to 10.0.0.2 00:19:06.857 Attached to 10.0.0.2 00:19:06.857 Registering asynchronous event callbacks... 00:19:06.857 Starting namespace attribute notice tests for all controllers... 00:19:06.857 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:06.857 aer_cb - Changed Namespace 00:19:06.857 Cleaning up... 00:19:06.857 [ 00:19:06.857 { 00:19:06.857 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:06.857 "subtype": "Discovery", 00:19:06.857 "listen_addresses": [], 00:19:06.857 "allow_any_host": true, 00:19:06.857 "hosts": [] 00:19:06.857 }, 00:19:06.857 { 00:19:06.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.858 "subtype": "NVMe", 00:19:06.858 "listen_addresses": [ 00:19:06.858 { 00:19:06.858 "trtype": "TCP", 00:19:06.858 "adrfam": "IPv4", 00:19:06.858 "traddr": "10.0.0.2", 00:19:06.858 "trsvcid": "4420" 00:19:06.858 } 00:19:06.858 ], 00:19:06.858 "allow_any_host": true, 00:19:06.858 "hosts": [], 00:19:06.858 "serial_number": "SPDK00000000000001", 00:19:06.858 "model_number": "SPDK bdev Controller", 00:19:06.858 "max_namespaces": 2, 00:19:06.858 "min_cntlid": 1, 00:19:06.858 "max_cntlid": 65519, 00:19:06.858 "namespaces": [ 00:19:06.858 { 00:19:06.858 "nsid": 1, 00:19:06.858 "bdev_name": "Malloc0", 00:19:06.858 "name": "Malloc0", 00:19:06.858 "nguid": "22E9549F58754A758EF3EC0A6D224A24", 00:19:06.858 "uuid": "22e9549f-5875-4a75-8ef3-ec0a6d224a24" 00:19:06.858 }, 00:19:06.858 { 00:19:06.858 "nsid": 2, 00:19:06.858 "bdev_name": "Malloc1", 00:19:06.858 "name": "Malloc1", 00:19:06.858 "nguid": "BF3F0E3EFC4B4C099DA5D31A9000A9F1", 00:19:06.858 "uuid": "bf3f0e3e-fc4b-4c09-9da5-d31a9000a9f1" 00:19:06.858 } 00:19:06.858 ] 00:19:06.858 } 00:19:06.858 ] 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1087116 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.858 03:13:37 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.858 rmmod nvme_tcp 00:19:06.858 rmmod nvme_fabrics 00:19:06.858 rmmod nvme_keyring 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1086941 ']' 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1086941 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1086941 ']' 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1086941 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1086941 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1086941' 00:19:06.858 killing process with pid 1086941 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1086941 00:19:06.858 [2024-05-15 03:13:37.945627] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:06.858 03:13:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1086941 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.117 03:13:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.653 03:13:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:09.653 00:19:09.653 real 0m9.220s 00:19:09.653 user 0m7.221s 00:19:09.653 sys 0m4.488s 00:19:09.653 03:13:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:09.653 03:13:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:09.653 ************************************ 00:19:09.653 END TEST nvmf_aer 00:19:09.653 ************************************ 00:19:09.653 03:13:40 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:09.653 03:13:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:09.653 03:13:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.653 03:13:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:09.653 ************************************ 00:19:09.653 START TEST nvmf_async_init 00:19:09.653 ************************************ 00:19:09.653 03:13:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:09.653 * Looking for test storage... 00:19:09.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
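The nvmf_aer test that closes above is a round trip for the Changed Namespace asynchronous event: build a subsystem capped at two namespaces, park an AER listener on it, then hot-add a second namespace and confirm the callback fires (the "aer_cb - Changed Namespace" line). A condensed sketch of the same flow, assuming a target already reachable on 10.0.0.2:4420 and the rpc.py and test/nvme/aer/aer paths from this tree:

  RPC="./scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The listener expects 2 namespaces (-n 2) and touches the -t file once
  # its event callbacks are registered, which is what waitforfile polls on.
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # Hot-adding a second namespace should raise the Changed Namespace AER.
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2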
00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.654 
03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=54ec26bc96fa4da2b125dba41f0ab18e 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.654 03:13:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:14.930 03:13:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:14.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:14.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.930 03:13:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:14.930 Found net devices under 0000:86:00.0: cvl_0_0 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:14.930 Found net devices under 0000:86:00.1: cvl_0_1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.930 03:13:45 
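
Each matched PCI function is then resolved to its kernel net device through sysfs (nvmf/common.sh@383 above), and with two usable ports the first (cvl_0_0) becomes the target interface, due to receive 10.0.0.2 inside a namespace, while the second (cvl_0_1) stays in the root namespace as the initiator with 10.0.0.1. A sketch of the sysfs lookup, mirroring the script's own glob:

  # Resolve a PCI function to its net device name, as nvmf/common.sh@383 does.
  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the sysfs net directory
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
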
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:19:14.930 00:19:14.930 --- 10.0.0.2 ping statistics --- 00:19:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.930 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:19:14.930 00:19:14.930 --- 10.0.0.1 ping statistics --- 00:19:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.930 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:14.930 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1090518 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1090518 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1090518 ']' 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.931 03:13:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:14.931 [2024-05-15 03:13:45.820592] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:14.931 [2024-05-15 03:13:45.820633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.931 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.931 [2024-05-15 03:13:45.878495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.931 [2024-05-15 03:13:45.956926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.931 [2024-05-15 03:13:45.956962] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
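
The wiring just performed puts the target port in its own network namespace so initiator and target traffic really cross the link: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, assuming root and the same interface names (cvl_0_0/cvl_0_1 are specific to this host's ice driver):

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself then runs inside the namespace (from the SPDK build tree):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
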
00:19:14.931 [2024-05-15 03:13:45.956969] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.931 [2024-05-15 03:13:45.956974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.931 [2024-05-15 03:13:45.956979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.931 [2024-05-15 03:13:45.957002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.498 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:15.498 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:19:15.498 03:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.498 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.498 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 [2024-05-15 03:13:46.668181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 null0 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 54ec26bc96fa4da2b125dba41f0ab18e 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.756 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:15.757 
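
The subsystem above is assembled entirely over the RPC socket: a TCP transport, a 1024-block null bdev, subsystem cnode0 with its namespace keyed to the pre-generated nguid, then a listener on 10.0.0.2:4420. The same provisioning sequence as direct scripts/rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock (the log's rpc_cmd wrapper adds retry handling on top of this):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g 54ec26bc96fa4da2b125dba41f0ab18e
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The host side then attaches with bdev_nvme_attach_controller -b nvme0, producing bdev nvme0n1 whose reported uuid is the dashed form of that nguid, as the JSON dump below shows.
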
03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.757 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:15.757 [2024-05-15 03:13:46.708242] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:15.757 [2024-05-15 03:13:46.708404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.757 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.757 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:15.757 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.757 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 nvme0n1 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 [ 00:19:16.015 { 00:19:16.015 "name": "nvme0n1", 00:19:16.015 "aliases": [ 00:19:16.015 "54ec26bc-96fa-4da2-b125-dba41f0ab18e" 00:19:16.015 ], 00:19:16.015 "product_name": "NVMe disk", 00:19:16.015 "block_size": 512, 00:19:16.015 "num_blocks": 2097152, 00:19:16.015 "uuid": "54ec26bc-96fa-4da2-b125-dba41f0ab18e", 00:19:16.015 "assigned_rate_limits": { 00:19:16.015 "rw_ios_per_sec": 0, 00:19:16.015 "rw_mbytes_per_sec": 0, 00:19:16.015 "r_mbytes_per_sec": 0, 00:19:16.015 "w_mbytes_per_sec": 0 00:19:16.015 }, 00:19:16.015 "claimed": false, 00:19:16.015 "zoned": false, 00:19:16.015 "supported_io_types": { 00:19:16.015 "read": true, 00:19:16.015 "write": true, 00:19:16.015 "unmap": false, 00:19:16.015 "write_zeroes": true, 00:19:16.015 "flush": true, 00:19:16.015 "reset": true, 00:19:16.015 "compare": true, 00:19:16.015 "compare_and_write": true, 00:19:16.015 "abort": true, 00:19:16.015 "nvme_admin": true, 00:19:16.015 "nvme_io": true 00:19:16.015 }, 00:19:16.015 "memory_domains": [ 00:19:16.015 { 00:19:16.015 "dma_device_id": "system", 00:19:16.015 "dma_device_type": 1 00:19:16.015 } 00:19:16.015 ], 00:19:16.015 "driver_specific": { 00:19:16.015 "nvme": [ 00:19:16.015 { 00:19:16.015 "trid": { 00:19:16.015 "trtype": "TCP", 00:19:16.015 "adrfam": "IPv4", 00:19:16.015 "traddr": "10.0.0.2", 00:19:16.015 "trsvcid": "4420", 00:19:16.015 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:16.015 }, 00:19:16.015 "ctrlr_data": { 00:19:16.015 "cntlid": 1, 00:19:16.015 "vendor_id": "0x8086", 00:19:16.015 "model_number": "SPDK bdev Controller", 00:19:16.015 "serial_number": "00000000000000000000", 00:19:16.015 "firmware_revision": "24.05", 00:19:16.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.015 "oacs": { 00:19:16.015 "security": 0, 00:19:16.015 "format": 0, 00:19:16.015 "firmware": 0, 00:19:16.015 "ns_manage": 0 00:19:16.015 }, 00:19:16.015 "multi_ctrlr": true, 00:19:16.015 "ana_reporting": false 00:19:16.015 }, 00:19:16.015 "vs": { 00:19:16.015 "nvme_version": "1.3" 00:19:16.015 }, 00:19:16.015 "ns_data": { 00:19:16.015 "id": 1, 00:19:16.015 "can_share": true 00:19:16.015 } 
00:19:16.015 } 00:19:16.015 ], 00:19:16.015 "mp_policy": "active_passive" 00:19:16.015 } 00:19:16.015 } 00:19:16.015 ] 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 [2024-05-15 03:13:46.960924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:16.015 [2024-05-15 03:13:46.960977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaac1a0 (9): Bad file descriptor 00:19:16.015 [2024-05-15 03:13:47.092541] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 [ 00:19:16.015 { 00:19:16.015 "name": "nvme0n1", 00:19:16.015 "aliases": [ 00:19:16.015 "54ec26bc-96fa-4da2-b125-dba41f0ab18e" 00:19:16.015 ], 00:19:16.015 "product_name": "NVMe disk", 00:19:16.015 "block_size": 512, 00:19:16.015 "num_blocks": 2097152, 00:19:16.015 "uuid": "54ec26bc-96fa-4da2-b125-dba41f0ab18e", 00:19:16.015 "assigned_rate_limits": { 00:19:16.015 "rw_ios_per_sec": 0, 00:19:16.015 "rw_mbytes_per_sec": 0, 00:19:16.015 "r_mbytes_per_sec": 0, 00:19:16.015 "w_mbytes_per_sec": 0 00:19:16.015 }, 00:19:16.015 "claimed": false, 00:19:16.015 "zoned": false, 00:19:16.015 "supported_io_types": { 00:19:16.015 "read": true, 00:19:16.015 "write": true, 00:19:16.015 "unmap": false, 00:19:16.015 "write_zeroes": true, 00:19:16.015 "flush": true, 00:19:16.015 "reset": true, 00:19:16.015 "compare": true, 00:19:16.015 "compare_and_write": true, 00:19:16.015 "abort": true, 00:19:16.015 "nvme_admin": true, 00:19:16.015 "nvme_io": true 00:19:16.015 }, 00:19:16.015 "memory_domains": [ 00:19:16.015 { 00:19:16.015 "dma_device_id": "system", 00:19:16.015 "dma_device_type": 1 00:19:16.015 } 00:19:16.015 ], 00:19:16.015 "driver_specific": { 00:19:16.015 "nvme": [ 00:19:16.015 { 00:19:16.015 "trid": { 00:19:16.015 "trtype": "TCP", 00:19:16.015 "adrfam": "IPv4", 00:19:16.015 "traddr": "10.0.0.2", 00:19:16.015 "trsvcid": "4420", 00:19:16.015 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:16.015 }, 00:19:16.015 "ctrlr_data": { 00:19:16.015 "cntlid": 2, 00:19:16.015 "vendor_id": "0x8086", 00:19:16.015 "model_number": "SPDK bdev Controller", 00:19:16.015 "serial_number": "00000000000000000000", 00:19:16.015 "firmware_revision": "24.05", 00:19:16.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.015 "oacs": { 00:19:16.015 "security": 0, 00:19:16.015 "format": 0, 00:19:16.015 "firmware": 0, 00:19:16.015 "ns_manage": 0 00:19:16.015 }, 00:19:16.015 "multi_ctrlr": true, 00:19:16.015 "ana_reporting": false 00:19:16.015 }, 00:19:16.015 "vs": { 00:19:16.015 "nvme_version": "1.3" 00:19:16.015 }, 00:19:16.015 "ns_data": { 00:19:16.015 "id": 1, 00:19:16.015 "can_share": true 00:19:16.015 } 00:19:16.015 } 00:19:16.015 ], 00:19:16.015 "mp_policy": "active_passive" 
00:19:16.015 } 00:19:16.015 } 00:19:16.015 ] 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9TuiNSHtPU 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9TuiNSHtPU 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.015 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.015 [2024-05-15 03:13:47.149496] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.015 [2024-05-15 03:13:47.149591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9TuiNSHtPU 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.016 [2024-05-15 03:13:47.157515] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9TuiNSHtPU 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.016 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.016 [2024-05-15 03:13:47.165534] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.016 [2024-05-15 03:13:47.165567] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:19:16.274 nvme0n1 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.274 [ 00:19:16.274 { 00:19:16.274 "name": "nvme0n1", 00:19:16.274 "aliases": [ 00:19:16.274 "54ec26bc-96fa-4da2-b125-dba41f0ab18e" 00:19:16.274 ], 00:19:16.274 "product_name": "NVMe disk", 00:19:16.274 "block_size": 512, 00:19:16.274 "num_blocks": 2097152, 00:19:16.274 "uuid": "54ec26bc-96fa-4da2-b125-dba41f0ab18e", 00:19:16.274 "assigned_rate_limits": { 00:19:16.274 "rw_ios_per_sec": 0, 00:19:16.274 "rw_mbytes_per_sec": 0, 00:19:16.274 "r_mbytes_per_sec": 0, 00:19:16.274 "w_mbytes_per_sec": 0 00:19:16.274 }, 00:19:16.274 "claimed": false, 00:19:16.274 "zoned": false, 00:19:16.274 "supported_io_types": { 00:19:16.274 "read": true, 00:19:16.274 "write": true, 00:19:16.274 "unmap": false, 00:19:16.274 "write_zeroes": true, 00:19:16.274 "flush": true, 00:19:16.274 "reset": true, 00:19:16.274 "compare": true, 00:19:16.274 "compare_and_write": true, 00:19:16.274 "abort": true, 00:19:16.274 "nvme_admin": true, 00:19:16.274 "nvme_io": true 00:19:16.274 }, 00:19:16.274 "memory_domains": [ 00:19:16.274 { 00:19:16.274 "dma_device_id": "system", 00:19:16.274 "dma_device_type": 1 00:19:16.274 } 00:19:16.274 ], 00:19:16.274 "driver_specific": { 00:19:16.274 "nvme": [ 00:19:16.274 { 00:19:16.274 "trid": { 00:19:16.274 "trtype": "TCP", 00:19:16.274 "adrfam": "IPv4", 00:19:16.274 "traddr": "10.0.0.2", 00:19:16.274 "trsvcid": "4421", 00:19:16.274 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:16.274 }, 00:19:16.274 "ctrlr_data": { 00:19:16.274 "cntlid": 3, 00:19:16.274 "vendor_id": "0x8086", 00:19:16.274 "model_number": "SPDK bdev Controller", 00:19:16.274 "serial_number": "00000000000000000000", 00:19:16.274 "firmware_revision": "24.05", 00:19:16.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.274 "oacs": { 00:19:16.274 "security": 0, 00:19:16.274 "format": 0, 00:19:16.274 "firmware": 0, 00:19:16.274 "ns_manage": 0 00:19:16.274 }, 00:19:16.274 "multi_ctrlr": true, 00:19:16.274 "ana_reporting": false 00:19:16.274 }, 00:19:16.274 "vs": { 00:19:16.274 "nvme_version": "1.3" 00:19:16.274 }, 00:19:16.274 "ns_data": { 00:19:16.274 "id": 1, 00:19:16.274 "can_share": true 00:19:16.274 } 00:19:16.274 } 00:19:16.274 ], 00:19:16.274 "mp_policy": "active_passive" 00:19:16.274 } 00:19:16.274 } 00:19:16.274 ] 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9TuiNSHtPU 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.274 rmmod nvme_tcp 00:19:16.274 rmmod nvme_fabrics 00:19:16.274 rmmod nvme_keyring 00:19:16.274 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1090518 ']' 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1090518 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1090518 ']' 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1090518 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1090518 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1090518' 00:19:16.275 killing process with pid 1090518 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1090518 00:19:16.275 [2024-05-15 03:13:47.372969] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.275 [2024-05-15 03:13:47.372993] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:16.275 [2024-05-15 03:13:47.373001] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:16.275 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1090518 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.533 03:13:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.068 03:13:49 
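
The secure-channel leg exercised above hinges on a pre-shared key in the NVMe TLS interchange format (NVMeTLSkey-1:01:<base64>:), written mode 0600 and handed to both sides; once allow_any_host is disabled, only nqn.2016-06.io.spdk:host1 presenting the matching PSK can reach the 4421 listener. A sketch of that flow using the same deprecated --psk path interface this v24.05-pre build warns about (later SPDK releases move to named keyring objects):

  # PSK-gated listener on 4421: target side.
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # Initiator side: attach with the same key and host NQN.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

Teardown then detaches the controller, removes the key file, and unloads nvme-tcp/nvme-fabrics/nvme-keyring via the retrying modprobe loop traced above before the namespace is deleted.
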
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.068 00:19:19.068 real 0m9.337s 00:19:19.068 user 0m3.483s 00:19:19.068 sys 0m4.389s 00:19:19.068 03:13:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:19.068 03:13:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:19.068 ************************************ 00:19:19.068 END TEST nvmf_async_init 00:19:19.068 ************************************ 00:19:19.068 03:13:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.068 ************************************ 00:19:19.068 START TEST dma 00:19:19.068 ************************************ 00:19:19.068 03:13:49 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:19.068 * Looking for test storage... 00:19:19.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:19.068 03:13:49 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.068 03:13:49 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.068 03:13:49 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.068 03:13:49 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.068 03:13:49 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.068 03:13:49 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.068 03:13:49 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.068 03:13:49 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:19:19.068 03:13:49 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.068 03:13:49 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.068 03:13:49 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:19.068 03:13:49 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:19:19.068 00:19:19.068 real 0m0.121s 00:19:19.068 user 0m0.056s 00:19:19.068 sys 0m0.073s 00:19:19.068 03:13:49 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:19.068 03:13:49 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:19:19.068 ************************************ 
00:19:19.068 END TEST dma 00:19:19.068 ************************************ 00:19:19.068 03:13:49 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:19.068 03:13:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.068 ************************************ 00:19:19.068 START TEST nvmf_identify 00:19:19.068 ************************************ 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:19.069 * Looking for test storage... 00:19:19.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.069 03:13:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.069 03:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:24.343 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:24.343 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.343 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:24.344 Found net devices under 0000:86:00.0: cvl_0_0 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:24.344 Found net devices under 0000:86:00.1: cvl_0_1 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:24.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:19:24.344 00:19:24.344 --- 10.0.0.2 ping statistics --- 00:19:24.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.344 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:19:24.344 00:19:24.344 --- 10.0.0.1 ping statistics --- 00:19:24.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.344 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:24.344 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1094296 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1094296 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1094296 ']' 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.603 03:13:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:24.603 [2024-05-15 03:13:55.553237] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
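
Unlike the single-core async_init run, the identify test starts the target with -m 0xF, so four reactors come up (cores 0 through 3 in the notices below), and waitforlisten polls until the RPC socket answers before any rpc_cmd is issued. A hypothetical simplification of the waitforlisten idiom from common/autotest_common.sh, not the exact implementation:

  # Wait until the target's RPC socket accepts commands (simplified waitforlisten).
  pid=$1
  rpc_sock=/var/tmp/spdk.sock
  for ((i = 100; i != 0; i--)); do
      kill -0 "$pid" 2> /dev/null || { echo "process $pid died"; exit 1; }
      ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
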
00:19:24.603 [2024-05-15 03:13:55.553283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.603 [2024-05-15 03:13:55.613439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.603 [2024-05-15 03:13:55.695695] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.603 [2024-05-15 03:13:55.695733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.603 [2024-05-15 03:13:55.695739] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.603 [2024-05-15 03:13:55.695746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.603 [2024-05-15 03:13:55.695751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.603 [2024-05-15 03:13:55.695793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.603 [2024-05-15 03:13:55.695892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.603 [2024-05-15 03:13:55.695978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.603 [2024-05-15 03:13:55.695980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 [2024-05-15 03:13:56.381368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 Malloc0 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 [2024-05-15 03:13:56.464889] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:25.545 [2024-05-15 03:13:56.465121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.545 [ 00:19:25.545 { 00:19:25.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:25.545 "subtype": "Discovery", 00:19:25.545 "listen_addresses": [ 00:19:25.545 { 00:19:25.545 "trtype": "TCP", 00:19:25.545 "adrfam": "IPv4", 00:19:25.545 "traddr": "10.0.0.2", 00:19:25.545 "trsvcid": "4420" 00:19:25.545 } 00:19:25.545 ], 00:19:25.545 "allow_any_host": true, 00:19:25.545 "hosts": [] 00:19:25.545 }, 00:19:25.545 { 00:19:25.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.545 "subtype": "NVMe", 00:19:25.545 "listen_addresses": [ 00:19:25.545 { 00:19:25.545 "trtype": "TCP", 00:19:25.545 "adrfam": "IPv4", 00:19:25.545 "traddr": "10.0.0.2", 00:19:25.545 "trsvcid": "4420" 00:19:25.545 } 00:19:25.545 ], 00:19:25.545 "allow_any_host": true, 00:19:25.545 "hosts": [], 00:19:25.545 "serial_number": "SPDK00000000000001", 00:19:25.545 "model_number": "SPDK bdev Controller", 00:19:25.545 "max_namespaces": 32, 00:19:25.545 "min_cntlid": 1, 00:19:25.545 "max_cntlid": 65519, 00:19:25.545 "namespaces": [ 00:19:25.545 { 00:19:25.545 "nsid": 1, 00:19:25.545 "bdev_name": "Malloc0", 00:19:25.545 "name": "Malloc0", 00:19:25.545 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:25.545 "eui64": "ABCDEF0123456789", 00:19:25.545 "uuid": "858dc698-a3d1-410a-871d-2ed80aed04f0" 00:19:25.545 } 00:19:25.545 ] 00:19:25.545 } 00:19:25.545 ] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.545 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:25.545 [2024-05-15 
03:13:56.516452] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:25.545 [2024-05-15 03:13:56.516506] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094544 ] 00:19:25.545 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.545 [2024-05-15 03:13:56.546003] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:25.546 [2024-05-15 03:13:56.546049] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:25.546 [2024-05-15 03:13:56.546054] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:25.546 [2024-05-15 03:13:56.546064] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:25.546 [2024-05-15 03:13:56.546072] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:25.546 [2024-05-15 03:13:56.546436] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:25.546 [2024-05-15 03:13:56.546462] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbfcc30 0 00:19:25.546 [2024-05-15 03:13:56.560471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:25.546 [2024-05-15 03:13:56.560484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:25.546 [2024-05-15 03:13:56.560491] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:25.546 [2024-05-15 03:13:56.560494] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:25.546 [2024-05-15 03:13:56.560528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.560534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.560537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.560550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:25.546 [2024-05-15 03:13:56.560565] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.568474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.568482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.568485] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.568501] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:25.546 [2024-05-15 03:13:56.568507] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:25.546 [2024-05-15 03:13:56.568512] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:25.546 [2024-05-15 03:13:56.568523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
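
The rpc_cmd calls above are the entire target-side configuration for this test (rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock): one TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 with any-host access and serial SPDK00000000000001, one namespace with fixed NGUID/EUI64, and listeners on 10.0.0.2:4420 for both cnode1 and the discovery subsystem; the nvmf_get_subsystems JSON above echoes exactly that state back. Reproduced by hand from the SPDK repo root, this is roughly (a sketch under those assumptions; flags copied verbatim from host/identify.sh):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                     # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note the deprecation warning the target logs for [listen_]address.transport when the listeners are added; the test still passes, but trtype is the forward-compatible spelling.
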
00:19:25.546 [2024-05-15 03:13:56.568526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568529] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.568537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.568549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.568746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.568752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.568756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.568764] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:25.546 [2024-05-15 03:13:56.568771] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:25.546 [2024-05-15 03:13:56.568777] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.568790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.568800] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.568873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.568879] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.568882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.568890] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:25.546 [2024-05-15 03:13:56.568897] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.568903] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568906] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.568909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.568916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.568925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.568995] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.569000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.569003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.569010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.569018] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569022] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569025] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.569031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.569040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.569110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.569115] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.569118] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569124] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.569128] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:25.546 [2024-05-15 03:13:56.569132] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.569139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.569243] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:25.546 [2024-05-15 03:13:56.569248] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.569255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569258] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.569267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.569276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.569347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.569352] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 
[2024-05-15 03:13:56.569355] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.569363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:25.546 [2024-05-15 03:13:56.569371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569374] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569377] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.569383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.569392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.569457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.546 [2024-05-15 03:13:56.569463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.546 [2024-05-15 03:13:56.569472] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.546 [2024-05-15 03:13:56.569479] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:25.546 [2024-05-15 03:13:56.569483] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:25.546 [2024-05-15 03:13:56.569490] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:25.546 [2024-05-15 03:13:56.569500] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:25.546 [2024-05-15 03:13:56.569510] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.546 [2024-05-15 03:13:56.569521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.546 [2024-05-15 03:13:56.569531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.546 [2024-05-15 03:13:56.569629] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.546 [2024-05-15 03:13:56.569635] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.546 [2024-05-15 03:13:56.569638] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569641] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfcc30): datao=0, datal=4096, cccid=0 00:19:25.546 [2024-05-15 03:13:56.569645] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc64980) on tqpair(0xbfcc30): expected_datao=0, payload_size=4096 00:19:25.546 
[2024-05-15 03:13:56.569649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569657] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.546 [2024-05-15 03:13:56.569661] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.569683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.569686] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.569696] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:25.547 [2024-05-15 03:13:56.569701] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:25.547 [2024-05-15 03:13:56.569705] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:25.547 [2024-05-15 03:13:56.569709] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:25.547 [2024-05-15 03:13:56.569713] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:25.547 [2024-05-15 03:13:56.569717] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:25.547 [2024-05-15 03:13:56.569728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:25.547 [2024-05-15 03:13:56.569736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.547 [2024-05-15 03:13:56.569759] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.547 [2024-05-15 03:13:56.569868] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.569873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.569876] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569879] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64980) on tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.569885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.547 [2024-05-15 03:13:56.569904] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.547 [2024-05-15 03:13:56.569920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.547 [2024-05-15 03:13:56.569936] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569942] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.547 [2024-05-15 03:13:56.569951] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:25.547 [2024-05-15 03:13:56.569961] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:25.547 [2024-05-15 03:13:56.569967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.569970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.569975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.547 [2024-05-15 03:13:56.569987] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64980, cid 0, qid 0 00:19:25.547 [2024-05-15 03:13:56.569991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64ae0, cid 1, qid 0 00:19:25.547 [2024-05-15 03:13:56.569995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64c40, cid 2, qid 0 00:19:25.547 [2024-05-15 03:13:56.569999] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.547 [2024-05-15 03:13:56.570003] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64f00, cid 4, qid 0 00:19:25.547 [2024-05-15 03:13:56.570107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.570112] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.570115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64f00) on 
tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.570123] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:25.547 [2024-05-15 03:13:56.570127] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:25.547 [2024-05-15 03:13:56.570137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.570146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.547 [2024-05-15 03:13:56.570155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64f00, cid 4, qid 0 00:19:25.547 [2024-05-15 03:13:56.570235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.547 [2024-05-15 03:13:56.570241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.547 [2024-05-15 03:13:56.570244] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570247] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfcc30): datao=0, datal=4096, cccid=4 00:19:25.547 [2024-05-15 03:13:56.570251] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc64f00) on tqpair(0xbfcc30): expected_datao=0, payload_size=4096 00:19:25.547 [2024-05-15 03:13:56.570255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570260] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570263] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.570286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.570289] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64f00) on tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.570302] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:25.547 [2024-05-15 03:13:56.570323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.570332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.547 [2024-05-15 03:13:56.570338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.570349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.547 [2024-05-15 03:13:56.570362] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64f00, cid 4, qid 0 00:19:25.547 [2024-05-15 03:13:56.570367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc65060, cid 5, qid 0 00:19:25.547 [2024-05-15 03:13:56.570476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.547 [2024-05-15 03:13:56.570482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.547 [2024-05-15 03:13:56.570485] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570488] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfcc30): datao=0, datal=1024, cccid=4 00:19:25.547 [2024-05-15 03:13:56.570492] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc64f00) on tqpair(0xbfcc30): expected_datao=0, payload_size=1024 00:19:25.547 [2024-05-15 03:13:56.570496] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570501] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570505] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.570514] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.570517] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.570520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc65060) on tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.615474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.547 [2024-05-15 03:13:56.615486] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.547 [2024-05-15 03:13:56.615489] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.615495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64f00) on tqpair=0xbfcc30 00:19:25.547 [2024-05-15 03:13:56.615506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.615510] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfcc30) 00:19:25.547 [2024-05-15 03:13:56.615517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.547 [2024-05-15 03:13:56.615532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64f00, cid 4, qid 0 00:19:25.547 [2024-05-15 03:13:56.615731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.547 [2024-05-15 03:13:56.615736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.547 [2024-05-15 03:13:56.615739] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.547 [2024-05-15 03:13:56.615742] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfcc30): datao=0, datal=3072, cccid=4 00:19:25.547 [2024-05-15 03:13:56.615746] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc64f00) on tqpair(0xbfcc30): expected_datao=0, payload_size=3072 00:19:25.547 [2024-05-15 03:13:56.615750] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615773] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
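
The debug trace above is the admin-queue bring-up that spdk_nvme_identify performs against the discovery controller: icreq/icresp handshake, FABRIC CONNECT (cid 0), property reads of VS and CAP, a disable/enable cycle through CC.EN and CSTS.RDY, IDENTIFY controller, four queued ASYNC EVENT REQUESTs (cid 0-3), keep-alive setup (one every 5000000 us), and finally a series of Get Log Page reads of the discovery log (page 0x70) whose decoded contents follow below. Assuming nvme-cli is installed on the box, the same two discovery records can be cross-checked from the kernel initiator in the root namespace (the run already modprobe'd nvme-tcp earlier):

  nvme discover -t tcp -a 10.0.0.2 -s 4420   # should list the discovery subsystem
                                             # itself plus nqn.2016-06.io.spdk:cnode1
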
00:19:25.548 [2024-05-15 03:13:56.615777] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.548 [2024-05-15 03:13:56.615830] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.548 [2024-05-15 03:13:56.615833] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64f00) on tqpair=0xbfcc30 00:19:25.548 [2024-05-15 03:13:56.615843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbfcc30) 00:19:25.548 [2024-05-15 03:13:56.615852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.548 [2024-05-15 03:13:56.615865] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64f00, cid 4, qid 0 00:19:25.548 [2024-05-15 03:13:56.615941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.548 [2024-05-15 03:13:56.615946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.548 [2024-05-15 03:13:56.615949] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615952] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbfcc30): datao=0, datal=8, cccid=4 00:19:25.548 [2024-05-15 03:13:56.615956] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc64f00) on tqpair(0xbfcc30): expected_datao=0, payload_size=8 00:19:25.548 [2024-05-15 03:13:56.615959] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615964] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.615967] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.656620] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.548 [2024-05-15 03:13:56.656633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.548 [2024-05-15 03:13:56.656637] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.548 [2024-05-15 03:13:56.656640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64f00) on tqpair=0xbfcc30 00:19:25.548 ===================================================== 00:19:25.548 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:25.548 ===================================================== 00:19:25.548 Controller Capabilities/Features 00:19:25.548 ================================ 00:19:25.548 Vendor ID: 0000 00:19:25.548 Subsystem Vendor ID: 0000 00:19:25.548 Serial Number: .................... 00:19:25.548 Model Number: ........................................ 
00:19:25.548 Firmware Version: 24.05 00:19:25.548 Recommended Arb Burst: 0 00:19:25.548 IEEE OUI Identifier: 00 00 00 00:19:25.548 Multi-path I/O 00:19:25.548 May have multiple subsystem ports: No 00:19:25.548 May have multiple controllers: No 00:19:25.548 Associated with SR-IOV VF: No 00:19:25.548 Max Data Transfer Size: 131072 00:19:25.548 Max Number of Namespaces: 0 00:19:25.548 Max Number of I/O Queues: 1024 00:19:25.548 NVMe Specification Version (VS): 1.3 00:19:25.548 NVMe Specification Version (Identify): 1.3 00:19:25.548 Maximum Queue Entries: 128 00:19:25.548 Contiguous Queues Required: Yes 00:19:25.548 Arbitration Mechanisms Supported 00:19:25.548 Weighted Round Robin: Not Supported 00:19:25.548 Vendor Specific: Not Supported 00:19:25.548 Reset Timeout: 15000 ms 00:19:25.548 Doorbell Stride: 4 bytes 00:19:25.548 NVM Subsystem Reset: Not Supported 00:19:25.548 Command Sets Supported 00:19:25.548 NVM Command Set: Supported 00:19:25.548 Boot Partition: Not Supported 00:19:25.548 Memory Page Size Minimum: 4096 bytes 00:19:25.548 Memory Page Size Maximum: 4096 bytes 00:19:25.548 Persistent Memory Region: Not Supported 00:19:25.548 Optional Asynchronous Events Supported 00:19:25.548 Namespace Attribute Notices: Not Supported 00:19:25.548 Firmware Activation Notices: Not Supported 00:19:25.548 ANA Change Notices: Not Supported 00:19:25.548 PLE Aggregate Log Change Notices: Not Supported 00:19:25.548 LBA Status Info Alert Notices: Not Supported 00:19:25.548 EGE Aggregate Log Change Notices: Not Supported 00:19:25.548 Normal NVM Subsystem Shutdown event: Not Supported 00:19:25.548 Zone Descriptor Change Notices: Not Supported 00:19:25.548 Discovery Log Change Notices: Supported 00:19:25.548 Controller Attributes 00:19:25.548 128-bit Host Identifier: Not Supported 00:19:25.548 Non-Operational Permissive Mode: Not Supported 00:19:25.548 NVM Sets: Not Supported 00:19:25.548 Read Recovery Levels: Not Supported 00:19:25.548 Endurance Groups: Not Supported 00:19:25.548 Predictable Latency Mode: Not Supported 00:19:25.548 Traffic Based Keep ALive: Not Supported 00:19:25.548 Namespace Granularity: Not Supported 00:19:25.548 SQ Associations: Not Supported 00:19:25.548 UUID List: Not Supported 00:19:25.548 Multi-Domain Subsystem: Not Supported 00:19:25.548 Fixed Capacity Management: Not Supported 00:19:25.548 Variable Capacity Management: Not Supported 00:19:25.548 Delete Endurance Group: Not Supported 00:19:25.548 Delete NVM Set: Not Supported 00:19:25.548 Extended LBA Formats Supported: Not Supported 00:19:25.548 Flexible Data Placement Supported: Not Supported 00:19:25.548 00:19:25.548 Controller Memory Buffer Support 00:19:25.548 ================================ 00:19:25.548 Supported: No 00:19:25.548 00:19:25.548 Persistent Memory Region Support 00:19:25.548 ================================ 00:19:25.548 Supported: No 00:19:25.548 00:19:25.548 Admin Command Set Attributes 00:19:25.548 ============================ 00:19:25.548 Security Send/Receive: Not Supported 00:19:25.548 Format NVM: Not Supported 00:19:25.548 Firmware Activate/Download: Not Supported 00:19:25.548 Namespace Management: Not Supported 00:19:25.548 Device Self-Test: Not Supported 00:19:25.548 Directives: Not Supported 00:19:25.548 NVMe-MI: Not Supported 00:19:25.548 Virtualization Management: Not Supported 00:19:25.548 Doorbell Buffer Config: Not Supported 00:19:25.548 Get LBA Status Capability: Not Supported 00:19:25.548 Command & Feature Lockdown Capability: Not Supported 00:19:25.548 Abort Command Limit: 1 00:19:25.548 Async 
Event Request Limit: 4 00:19:25.548 Number of Firmware Slots: N/A 00:19:25.548 Firmware Slot 1 Read-Only: N/A 00:19:25.548 Firmware Activation Without Reset: N/A 00:19:25.548 Multiple Update Detection Support: N/A 00:19:25.548 Firmware Update Granularity: No Information Provided 00:19:25.548 Per-Namespace SMART Log: No 00:19:25.548 Asymmetric Namespace Access Log Page: Not Supported 00:19:25.548 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:25.548 Command Effects Log Page: Not Supported 00:19:25.548 Get Log Page Extended Data: Supported 00:19:25.548 Telemetry Log Pages: Not Supported 00:19:25.548 Persistent Event Log Pages: Not Supported 00:19:25.548 Supported Log Pages Log Page: May Support 00:19:25.548 Commands Supported & Effects Log Page: Not Supported 00:19:25.548 Feature Identifiers & Effects Log Page:May Support 00:19:25.548 NVMe-MI Commands & Effects Log Page: May Support 00:19:25.548 Data Area 4 for Telemetry Log: Not Supported 00:19:25.548 Error Log Page Entries Supported: 128 00:19:25.548 Keep Alive: Not Supported 00:19:25.548 00:19:25.548 NVM Command Set Attributes 00:19:25.548 ========================== 00:19:25.548 Submission Queue Entry Size 00:19:25.548 Max: 1 00:19:25.548 Min: 1 00:19:25.548 Completion Queue Entry Size 00:19:25.548 Max: 1 00:19:25.548 Min: 1 00:19:25.548 Number of Namespaces: 0 00:19:25.548 Compare Command: Not Supported 00:19:25.548 Write Uncorrectable Command: Not Supported 00:19:25.548 Dataset Management Command: Not Supported 00:19:25.548 Write Zeroes Command: Not Supported 00:19:25.548 Set Features Save Field: Not Supported 00:19:25.548 Reservations: Not Supported 00:19:25.548 Timestamp: Not Supported 00:19:25.548 Copy: Not Supported 00:19:25.548 Volatile Write Cache: Not Present 00:19:25.548 Atomic Write Unit (Normal): 1 00:19:25.548 Atomic Write Unit (PFail): 1 00:19:25.548 Atomic Compare & Write Unit: 1 00:19:25.548 Fused Compare & Write: Supported 00:19:25.548 Scatter-Gather List 00:19:25.548 SGL Command Set: Supported 00:19:25.548 SGL Keyed: Supported 00:19:25.548 SGL Bit Bucket Descriptor: Not Supported 00:19:25.548 SGL Metadata Pointer: Not Supported 00:19:25.548 Oversized SGL: Not Supported 00:19:25.548 SGL Metadata Address: Not Supported 00:19:25.548 SGL Offset: Supported 00:19:25.548 Transport SGL Data Block: Not Supported 00:19:25.548 Replay Protected Memory Block: Not Supported 00:19:25.548 00:19:25.548 Firmware Slot Information 00:19:25.548 ========================= 00:19:25.548 Active slot: 0 00:19:25.548 00:19:25.548 00:19:25.548 Error Log 00:19:25.548 ========= 00:19:25.548 00:19:25.548 Active Namespaces 00:19:25.548 ================= 00:19:25.548 Discovery Log Page 00:19:25.548 ================== 00:19:25.548 Generation Counter: 2 00:19:25.548 Number of Records: 2 00:19:25.548 Record Format: 0 00:19:25.548 00:19:25.548 Discovery Log Entry 0 00:19:25.548 ---------------------- 00:19:25.548 Transport Type: 3 (TCP) 00:19:25.548 Address Family: 1 (IPv4) 00:19:25.548 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:25.548 Entry Flags: 00:19:25.548 Duplicate Returned Information: 1 00:19:25.548 Explicit Persistent Connection Support for Discovery: 1 00:19:25.548 Transport Requirements: 00:19:25.548 Secure Channel: Not Required 00:19:25.548 Port ID: 0 (0x0000) 00:19:25.548 Controller ID: 65535 (0xffff) 00:19:25.549 Admin Max SQ Size: 128 00:19:25.549 Transport Service Identifier: 4420 00:19:25.549 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:25.549 Transport Address: 10.0.0.2 00:19:25.549 
Discovery Log Entry 1 00:19:25.549 ---------------------- 00:19:25.549 Transport Type: 3 (TCP) 00:19:25.549 Address Family: 1 (IPv4) 00:19:25.549 Subsystem Type: 2 (NVM Subsystem) 00:19:25.549 Entry Flags: 00:19:25.549 Duplicate Returned Information: 0 00:19:25.549 Explicit Persistent Connection Support for Discovery: 0 00:19:25.549 Transport Requirements: 00:19:25.549 Secure Channel: Not Required 00:19:25.549 Port ID: 0 (0x0000) 00:19:25.549 Controller ID: 65535 (0xffff) 00:19:25.549 Admin Max SQ Size: 128 00:19:25.549 Transport Service Identifier: 4420 00:19:25.549 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:25.549 Transport Address: 10.0.0.2 [2024-05-15 03:13:56.656720] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:25.549 [2024-05-15 03:13:56.656733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.549 [2024-05-15 03:13:56.656739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.549 [2024-05-15 03:13:56.656746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.549 [2024-05-15 03:13:56.656751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.549 [2024-05-15 03:13:56.656759] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656762] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.656773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.656786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.549 [2024-05-15 03:13:56.656851] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.549 [2024-05-15 03:13:56.656857] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.549 [2024-05-15 03:13:56.656860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.549 [2024-05-15 03:13:56.656869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.656881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.656893] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.549 [2024-05-15 03:13:56.656973] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.549 [2024-05-15 03:13:56.656979] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.549 [2024-05-15 03:13:56.656982] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.656985] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.549 [2024-05-15 03:13:56.656989] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:25.549 [2024-05-15 03:13:56.656993] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:25.549 [2024-05-15 03:13:56.657000] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.657013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.657021] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.549 [2024-05-15 03:13:56.657089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.549 [2024-05-15 03:13:56.657094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.549 [2024-05-15 03:13:56.657097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.549 [2024-05-15 03:13:56.657109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657112] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.657121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.657132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.549 [2024-05-15 03:13:56.657208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.549 [2024-05-15 03:13:56.657214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.549 [2024-05-15 03:13:56.657217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.549 [2024-05-15 03:13:56.657228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.657240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.657249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.549 [2024-05-15 03:13:56.657326] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.549 [2024-05-15 
03:13:56.657331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.549 [2024-05-15 03:13:56.657334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657337] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.549 [2024-05-15 03:13:56.657346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657349] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.549 [2024-05-15 03:13:56.657352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbfcc30) 00:19:25.549 [2024-05-15 03:13:56.657358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.549 [2024-05-15 03:13:56.657367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0
[... this FABRIC PROPERTY GET poll cycle repeats verbatim (only the timestamps advance, 03:13:56.657443 through 03:13:56.659459) while the host polls CSTS waiting for the discovery controller to shut down; the duplicate cycles are elided ...]
00:19:25.551 [2024-05-15 03:13:56.663477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc64da0, cid 3, qid 0 00:19:25.551 [2024-05-15 03:13:56.663636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.551 [2024-05-15 03:13:56.663642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu
type =5 00:19:25.551 [2024-05-15 03:13:56.663645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.551 [2024-05-15 03:13:56.663650] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc64da0) on tqpair=0xbfcc30 00:19:25.551 [2024-05-15 03:13:56.663657] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:25.551 00:19:25.551 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:25.551 [2024-05-15 03:13:56.698997] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:25.551 [2024-05-15 03:13:56.699031] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094546 ] 00:19:25.813 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.813 [2024-05-15 03:13:56.728691] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:25.813 [2024-05-15 03:13:56.728734] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:25.813 [2024-05-15 03:13:56.728739] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:25.813 [2024-05-15 03:13:56.728748] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:25.813 [2024-05-15 03:13:56.728755] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:25.813 [2024-05-15 03:13:56.729050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:25.813 [2024-05-15 03:13:56.729070] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2282c30 0 00:19:25.813 [2024-05-15 03:13:56.743473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:25.813 [2024-05-15 03:13:56.743485] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:25.813 [2024-05-15 03:13:56.743491] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:25.813 [2024-05-15 03:13:56.743494] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:25.813 [2024-05-15 03:13:56.743521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.743526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.743530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.813 [2024-05-15 03:13:56.743541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:25.813 [2024-05-15 03:13:56.743555] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.813 [2024-05-15 03:13:56.749767] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.813 [2024-05-15 03:13:56.749779] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.813 [2024-05-15 03:13:56.749782] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
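The spdk_nvme_identify invocation above drives everything that follows: the tool parses the -r transport-ID string, connects over TCP (the ICReq/ICResp exchange logged as pdu type = 1, then FABRIC CONNECT), and walks the controller init state machine. A minimal sketch of those same steps against SPDK's public host API (spdk/nvme.h) — the app name and the trimmed error handling are illustrative, not taken from the test:

```c
/* Minimal sketch of what spdk_nvme_identify does up to this point, using
 * SPDK's public host API. App name and error handling are illustrative. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport-ID string the test passes via -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: issues the ICReq/ICResp exchange and FABRIC
	 * CONNECT, then runs the init state machine traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The cached IDENTIFY CONTROLLER data backs the report printed at
	 * the end of this test (serial number, MDTS, and so on). */
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```

spdk_nvme_connect() is synchronous: it returns only after the whole sequence traced below — read vs/cap, CC.EN = 1, CSTS.RDY = 1, IDENTIFY, queue and AER setup — reaches "setting state to ready". The trace resumes with the FABRIC CONNECT response (CNTLID 0x0001).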
00:19:25.813 [2024-05-15 03:13:56.749786] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.813 [2024-05-15 03:13:56.749839] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:25.813 [2024-05-15 03:13:56.749846] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:25.813 [2024-05-15 03:13:56.749851] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:25.813 [2024-05-15 03:13:56.749861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.749865] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.749872] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.813 [2024-05-15 03:13:56.749880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.813 [2024-05-15 03:13:56.749894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.813 [2024-05-15 03:13:56.750056] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.813 [2024-05-15 03:13:56.750062] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.813 [2024-05-15 03:13:56.750065] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.750068] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.813 [2024-05-15 03:13:56.750073] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:25.813 [2024-05-15 03:13:56.750079] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:25.813 [2024-05-15 03:13:56.750086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.750089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.750092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.813 [2024-05-15 03:13:56.750098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.813 [2024-05-15 03:13:56.750108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.813 [2024-05-15 03:13:56.750204] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.813 [2024-05-15 03:13:56.750209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.813 [2024-05-15 03:13:56.750212] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.750216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.813 [2024-05-15 03:13:56.750220] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:25.813 [2024-05-15 03:13:56.750227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:25.813 [2024-05-15 03:13:56.750233] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.813 [2024-05-15 03:13:56.750236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.750244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.750254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.750354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.750359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.750362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.750370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:25.814 [2024-05-15 03:13:56.750378] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750381] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750385] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.750390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.750401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.750472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.750479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.750482] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.750489] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:25.814 [2024-05-15 03:13:56.750493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:25.814 [2024-05-15 03:13:56.750500] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:25.814 [2024-05-15 03:13:56.750605] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:25.814 [2024-05-15 03:13:56.750608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:25.814 [2024-05-15 03:13:56.750614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750621] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.750626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.750636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.750709] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.750715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.750717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.750725] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:25.814 [2024-05-15 03:13:56.750733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750737] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.750745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.750754] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.750860] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.750865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.750868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.750875] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:25.814 [2024-05-15 03:13:56.750879] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.750886] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:25.814 [2024-05-15 03:13:56.750893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.750902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.750905] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.750911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.750921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.751028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:19:25.814 [2024-05-15 03:13:56.751033] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.814 [2024-05-15 03:13:56.751036] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.751039] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=4096, cccid=0 00:19:25.814 [2024-05-15 03:13:56.751043] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ea980) on tqpair(0x2282c30): expected_datao=0, payload_size=4096 00:19:25.814 [2024-05-15 03:13:56.751047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.751094] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.751097] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.796484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.796487] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796491] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.796499] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:25.814 [2024-05-15 03:13:56.796503] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:25.814 [2024-05-15 03:13:56.796507] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:25.814 [2024-05-15 03:13:56.796510] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:25.814 [2024-05-15 03:13:56.796514] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:25.814 [2024-05-15 03:13:56.796519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.796530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.796537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.814 [2024-05-15 03:13:56.796563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.796745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.814 [2024-05-15 03:13:56.796750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.814 [2024-05-15 03:13:56.796753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x22ea980) on tqpair=0x2282c30 00:19:25.814 [2024-05-15 03:13:56.796763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796766] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796771] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.814 [2024-05-15 03:13:56.796782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.814 [2024-05-15 03:13:56.796798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796801] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.814 [2024-05-15 03:13:56.796814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.814 [2024-05-15 03:13:56.796829] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.796838] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:25.814 [2024-05-15 03:13:56.796844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.814 [2024-05-15 03:13:56.796847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.814 [2024-05-15 03:13:56.796853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.814 [2024-05-15 03:13:56.796864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ea980, cid 0, qid 0 00:19:25.814 [2024-05-15 03:13:56.796868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaae0, cid 1, qid 0 00:19:25.814 [2024-05-15 03:13:56.796873] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eac40, cid 2, qid 0 00:19:25.815 [2024-05-15 03:13:56.796877] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.815 [2024-05-15 03:13:56.796881] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.796989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.796995] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.796998] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.797006] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:25.815 [2024-05-15 03:13:56.797011] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797033] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.815 [2024-05-15 03:13:56.797055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.797154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.797160] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.797163] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.797209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797225] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.815 [2024-05-15 03:13:56.797243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.797331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.815 [2024-05-15 03:13:56.797337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:19:25.815 [2024-05-15 03:13:56.797340] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797343] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=4096, cccid=4 00:19:25.815 [2024-05-15 03:13:56.797347] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eaf00) on tqpair(0x2282c30): expected_datao=0, payload_size=4096 00:19:25.815 [2024-05-15 03:13:56.797351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797357] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797360] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.797411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.797414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797417] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.797428] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:25.815 [2024-05-15 03:13:56.797435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.815 [2024-05-15 03:13:56.797477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.797569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.815 [2024-05-15 03:13:56.797575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.815 [2024-05-15 03:13:56.797578] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797581] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=4096, cccid=4 00:19:25.815 [2024-05-15 03:13:56.797585] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eaf00) on tqpair(0x2282c30): expected_datao=0, payload_size=4096 00:19:25.815 [2024-05-15 03:13:56.797588] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797594] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797597] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.797663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.797666] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.797678] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797686] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797695] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.815 [2024-05-15 03:13:56.797711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.797794] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.815 [2024-05-15 03:13:56.797800] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.815 [2024-05-15 03:13:56.797803] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797806] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=4096, cccid=4 00:19:25.815 [2024-05-15 03:13:56.797809] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eaf00) on tqpair(0x2282c30): expected_datao=0, payload_size=4096 00:19:25.815 [2024-05-15 03:13:56.797813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797819] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797822] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.797863] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.797866] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797869] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.797878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797885] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797891] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797896] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797908] 
nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:25.815 [2024-05-15 03:13:56.797911] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:25.815 [2024-05-15 03:13:56.797916] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:25.815 [2024-05-15 03:13:56.797930] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.815 [2024-05-15 03:13:56.797945] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.797951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282c30) 00:19:25.815 [2024-05-15 03:13:56.797957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.815 [2024-05-15 03:13:56.797969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.815 [2024-05-15 03:13:56.797973] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb060, cid 5, qid 0 00:19:25.815 [2024-05-15 03:13:56.798096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.798102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.798104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.798108] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.798114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.815 [2024-05-15 03:13:56.798119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.815 [2024-05-15 03:13:56.798122] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.798125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb060) on tqpair=0x2282c30 00:19:25.815 [2024-05-15 03:13:56.798134] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.815 [2024-05-15 03:13:56.798137] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb060, cid 5, qid 0 00:19:25.816 [2024-05-15 03:13:56.798246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.798252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.798255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798258] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb060) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.798266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798284] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb060, cid 5, qid 0 00:19:25.816 [2024-05-15 03:13:56.798352] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.798358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.798361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb060) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.798372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798390] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb060, cid 5, qid 0 00:19:25.816 [2024-05-15 03:13:56.798500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.798506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.798509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb060) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.798522] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798555] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
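With the controller ready, the tool issues a batch of GET FEATURES and GET LOG PAGE admin commands (the batch continues with cid:7 just below). The GET LOG PAGE cdw10 values decode as the log page ID in the low byte and the 0-based dword count (NUMDL) in the upper 16 bits: 07ff0001 requests 8 KiB of the Error Information page (0x01), while 007f0002 and 007f0003 request 512 bytes of the SMART/Health (0x02) and Firmware Slot (0x03) pages — sizes that match the datal values in the C2H data PDUs that follow. A hedged sketch of fetching the health page the same way through the public API (reuses the hypothetical ctrlr handle from the earlier sketch):

```c
/* Hedged sketch: fetch the SMART / Health Information page (log page 0x02)
 * the way the "GET LOG PAGE (02) ... nsid:ffffffff" commands above do.
 * Assumes `ctrlr` came from spdk_nvme_connect() as in the earlier sketch. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool g_log_done;

static void
get_log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_log_done = true;
}

static void
read_health_page(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_health_information_page *health;

	/* DMA-safe buffer; its 512 bytes match the datal=512 C2H PDU above. */
	health = spdk_zmalloc(sizeof(*health), 0x1000, NULL,
			      SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (health == NULL) {
		return;
	}

	/* nsid = 0xffffffff: controller-wide scope, as in the traced command. */
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					     SPDK_NVME_GLOBAL_NS_TAG, health,
					     sizeof(*health), 0,
					     get_log_cb, NULL) == 0) {
		/* SPDK is polled-mode: the completion (CapsuleResp, pdu type = 5)
		 * is only handled when the caller polls the admin queue. */
		while (!g_log_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
		printf("Composite temperature: %u K\n", health->temperature);
	}

	spdk_free(health);
}
```

This is also why every CapsuleResp in this trace appears inside a completion-processing pass: nothing completes in SPDK unless the host thread polls for it.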
00:19:25.816 [2024-05-15 03:13:56.798568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2282c30) 00:19:25.816 [2024-05-15 03:13:56.798577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.816 [2024-05-15 03:13:56.798588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb060, cid 5, qid 0 00:19:25.816 [2024-05-15 03:13:56.798592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eaf00, cid 4, qid 0 00:19:25.816 [2024-05-15 03:13:56.798596] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb1c0, cid 6, qid 0 00:19:25.816 [2024-05-15 03:13:56.798600] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb320, cid 7, qid 0 00:19:25.816 [2024-05-15 03:13:56.798745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.816 [2024-05-15 03:13:56.798751] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.816 [2024-05-15 03:13:56.798754] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798757] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=8192, cccid=5 00:19:25.816 [2024-05-15 03:13:56.798761] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eb060) on tqpair(0x2282c30): expected_datao=0, payload_size=8192 00:19:25.816 [2024-05-15 03:13:56.798767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798851] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798855] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.816 [2024-05-15 03:13:56.798864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.816 [2024-05-15 03:13:56.798867] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798870] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=512, cccid=4 00:19:25.816 [2024-05-15 03:13:56.798874] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eaf00) on tqpair(0x2282c30): expected_datao=0, payload_size=512 00:19:25.816 [2024-05-15 03:13:56.798878] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798883] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798886] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.816 [2024-05-15 03:13:56.798895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.816 [2024-05-15 03:13:56.798898] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798901] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=512, cccid=6 00:19:25.816 [2024-05-15 03:13:56.798905] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eb1c0) on tqpair(0x2282c30): 
expected_datao=0, payload_size=512 00:19:25.816 [2024-05-15 03:13:56.798909] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798914] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798917] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798922] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:25.816 [2024-05-15 03:13:56.798926] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:25.816 [2024-05-15 03:13:56.798929] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798932] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282c30): datao=0, datal=4096, cccid=7 00:19:25.816 [2024-05-15 03:13:56.798936] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22eb320) on tqpair(0x2282c30): expected_datao=0, payload_size=4096 00:19:25.816 [2024-05-15 03:13:56.798940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798945] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798948] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.798960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.798963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb060) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.798977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.798982] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.798985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.798989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eaf00) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.798996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.799001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.799005] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.799008] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb1c0) on tqpair=0x2282c30 00:19:25.816 [2024-05-15 03:13:56.799017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.816 [2024-05-15 03:13:56.799022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.816 [2024-05-15 03:13:56.799025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.816 [2024-05-15 03:13:56.799028] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb320) on tqpair=0x2282c30 00:19:25.816 ===================================================== 00:19:25.816 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:25.816 ===================================================== 00:19:25.816 Controller Capabilities/Features 00:19:25.816 ================================ 00:19:25.816 Vendor ID: 8086 00:19:25.816 Subsystem Vendor ID: 
8086 00:19:25.816 Serial Number: SPDK00000000000001 00:19:25.816 Model Number: SPDK bdev Controller 00:19:25.816 Firmware Version: 24.05 00:19:25.816 Recommended Arb Burst: 6 00:19:25.816 IEEE OUI Identifier: e4 d2 5c 00:19:25.816 Multi-path I/O 00:19:25.816 May have multiple subsystem ports: Yes 00:19:25.816 May have multiple controllers: Yes 00:19:25.816 Associated with SR-IOV VF: No 00:19:25.816 Max Data Transfer Size: 131072 00:19:25.816 Max Number of Namespaces: 32 00:19:25.816 Max Number of I/O Queues: 127 00:19:25.816 NVMe Specification Version (VS): 1.3 00:19:25.816 NVMe Specification Version (Identify): 1.3 00:19:25.816 Maximum Queue Entries: 128 00:19:25.816 Contiguous Queues Required: Yes 00:19:25.816 Arbitration Mechanisms Supported 00:19:25.816 Weighted Round Robin: Not Supported 00:19:25.816 Vendor Specific: Not Supported 00:19:25.816 Reset Timeout: 15000 ms 00:19:25.816 Doorbell Stride: 4 bytes 00:19:25.816 NVM Subsystem Reset: Not Supported 00:19:25.816 Command Sets Supported 00:19:25.816 NVM Command Set: Supported 00:19:25.816 Boot Partition: Not Supported 00:19:25.816 Memory Page Size Minimum: 4096 bytes 00:19:25.816 Memory Page Size Maximum: 4096 bytes 00:19:25.816 Persistent Memory Region: Not Supported 00:19:25.816 Optional Asynchronous Events Supported 00:19:25.816 Namespace Attribute Notices: Supported 00:19:25.816 Firmware Activation Notices: Not Supported 00:19:25.816 ANA Change Notices: Not Supported 00:19:25.816 PLE Aggregate Log Change Notices: Not Supported 00:19:25.816 LBA Status Info Alert Notices: Not Supported 00:19:25.817 EGE Aggregate Log Change Notices: Not Supported 00:19:25.817 Normal NVM Subsystem Shutdown event: Not Supported 00:19:25.817 Zone Descriptor Change Notices: Not Supported 00:19:25.817 Discovery Log Change Notices: Not Supported 00:19:25.817 Controller Attributes 00:19:25.817 128-bit Host Identifier: Supported 00:19:25.817 Non-Operational Permissive Mode: Not Supported 00:19:25.817 NVM Sets: Not Supported 00:19:25.817 Read Recovery Levels: Not Supported 00:19:25.817 Endurance Groups: Not Supported 00:19:25.817 Predictable Latency Mode: Not Supported 00:19:25.817 Traffic Based Keep ALive: Not Supported 00:19:25.817 Namespace Granularity: Not Supported 00:19:25.817 SQ Associations: Not Supported 00:19:25.817 UUID List: Not Supported 00:19:25.817 Multi-Domain Subsystem: Not Supported 00:19:25.817 Fixed Capacity Management: Not Supported 00:19:25.817 Variable Capacity Management: Not Supported 00:19:25.817 Delete Endurance Group: Not Supported 00:19:25.817 Delete NVM Set: Not Supported 00:19:25.817 Extended LBA Formats Supported: Not Supported 00:19:25.817 Flexible Data Placement Supported: Not Supported 00:19:25.817 00:19:25.817 Controller Memory Buffer Support 00:19:25.817 ================================ 00:19:25.817 Supported: No 00:19:25.817 00:19:25.817 Persistent Memory Region Support 00:19:25.817 ================================ 00:19:25.817 Supported: No 00:19:25.817 00:19:25.817 Admin Command Set Attributes 00:19:25.817 ============================ 00:19:25.817 Security Send/Receive: Not Supported 00:19:25.817 Format NVM: Not Supported 00:19:25.817 Firmware Activate/Download: Not Supported 00:19:25.817 Namespace Management: Not Supported 00:19:25.817 Device Self-Test: Not Supported 00:19:25.817 Directives: Not Supported 00:19:25.817 NVMe-MI: Not Supported 00:19:25.817 Virtualization Management: Not Supported 00:19:25.817 Doorbell Buffer Config: Not Supported 00:19:25.817 Get LBA Status Capability: Not Supported 00:19:25.817 Command & 
Feature Lockdown Capability: Not Supported 00:19:25.817 Abort Command Limit: 4 00:19:25.817 Async Event Request Limit: 4 00:19:25.817 Number of Firmware Slots: N/A 00:19:25.817 Firmware Slot 1 Read-Only: N/A 00:19:25.817 Firmware Activation Without Reset: N/A 00:19:25.817 Multiple Update Detection Support: N/A 00:19:25.817 Firmware Update Granularity: No Information Provided 00:19:25.817 Per-Namespace SMART Log: No 00:19:25.817 Asymmetric Namespace Access Log Page: Not Supported 00:19:25.817 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:25.817 Command Effects Log Page: Supported 00:19:25.817 Get Log Page Extended Data: Supported 00:19:25.817 Telemetry Log Pages: Not Supported 00:19:25.817 Persistent Event Log Pages: Not Supported 00:19:25.817 Supported Log Pages Log Page: May Support 00:19:25.817 Commands Supported & Effects Log Page: Not Supported 00:19:25.817 Feature Identifiers & Effects Log Page:May Support 00:19:25.817 NVMe-MI Commands & Effects Log Page: May Support 00:19:25.817 Data Area 4 for Telemetry Log: Not Supported 00:19:25.817 Error Log Page Entries Supported: 128 00:19:25.817 Keep Alive: Supported 00:19:25.817 Keep Alive Granularity: 10000 ms 00:19:25.817 00:19:25.817 NVM Command Set Attributes 00:19:25.817 ========================== 00:19:25.817 Submission Queue Entry Size 00:19:25.817 Max: 64 00:19:25.817 Min: 64 00:19:25.817 Completion Queue Entry Size 00:19:25.817 Max: 16 00:19:25.817 Min: 16 00:19:25.817 Number of Namespaces: 32 00:19:25.817 Compare Command: Supported 00:19:25.817 Write Uncorrectable Command: Not Supported 00:19:25.817 Dataset Management Command: Supported 00:19:25.817 Write Zeroes Command: Supported 00:19:25.817 Set Features Save Field: Not Supported 00:19:25.817 Reservations: Supported 00:19:25.817 Timestamp: Not Supported 00:19:25.817 Copy: Supported 00:19:25.817 Volatile Write Cache: Present 00:19:25.817 Atomic Write Unit (Normal): 1 00:19:25.817 Atomic Write Unit (PFail): 1 00:19:25.817 Atomic Compare & Write Unit: 1 00:19:25.817 Fused Compare & Write: Supported 00:19:25.817 Scatter-Gather List 00:19:25.817 SGL Command Set: Supported 00:19:25.817 SGL Keyed: Supported 00:19:25.817 SGL Bit Bucket Descriptor: Not Supported 00:19:25.817 SGL Metadata Pointer: Not Supported 00:19:25.817 Oversized SGL: Not Supported 00:19:25.817 SGL Metadata Address: Not Supported 00:19:25.817 SGL Offset: Supported 00:19:25.817 Transport SGL Data Block: Not Supported 00:19:25.817 Replay Protected Memory Block: Not Supported 00:19:25.817 00:19:25.817 Firmware Slot Information 00:19:25.817 ========================= 00:19:25.817 Active slot: 1 00:19:25.817 Slot 1 Firmware Revision: 24.05 00:19:25.817 00:19:25.817 00:19:25.817 Commands Supported and Effects 00:19:25.817 ============================== 00:19:25.817 Admin Commands 00:19:25.817 -------------- 00:19:25.817 Get Log Page (02h): Supported 00:19:25.817 Identify (06h): Supported 00:19:25.817 Abort (08h): Supported 00:19:25.817 Set Features (09h): Supported 00:19:25.817 Get Features (0Ah): Supported 00:19:25.817 Asynchronous Event Request (0Ch): Supported 00:19:25.817 Keep Alive (18h): Supported 00:19:25.817 I/O Commands 00:19:25.817 ------------ 00:19:25.817 Flush (00h): Supported LBA-Change 00:19:25.817 Write (01h): Supported LBA-Change 00:19:25.817 Read (02h): Supported 00:19:25.817 Compare (05h): Supported 00:19:25.817 Write Zeroes (08h): Supported LBA-Change 00:19:25.817 Dataset Management (09h): Supported LBA-Change 00:19:25.817 Copy (19h): Supported LBA-Change 00:19:25.817 Unknown (79h): Supported LBA-Change 
00:19:25.817 Unknown (7Ah): Supported 00:19:25.817 00:19:25.817 Error Log 00:19:25.817 ========= 00:19:25.817 00:19:25.817 Arbitration 00:19:25.817 =========== 00:19:25.817 Arbitration Burst: 1 00:19:25.817 00:19:25.817 Power Management 00:19:25.817 ================ 00:19:25.817 Number of Power States: 1 00:19:25.817 Current Power State: Power State #0 00:19:25.817 Power State #0: 00:19:25.817 Max Power: 0.00 W 00:19:25.817 Non-Operational State: Operational 00:19:25.817 Entry Latency: Not Reported 00:19:25.817 Exit Latency: Not Reported 00:19:25.817 Relative Read Throughput: 0 00:19:25.817 Relative Read Latency: 0 00:19:25.817 Relative Write Throughput: 0 00:19:25.817 Relative Write Latency: 0 00:19:25.817 Idle Power: Not Reported 00:19:25.817 Active Power: Not Reported 00:19:25.817 Non-Operational Permissive Mode: Not Supported 00:19:25.817 00:19:25.817 Health Information 00:19:25.817 ================== 00:19:25.817 Critical Warnings: 00:19:25.817 Available Spare Space: OK 00:19:25.817 Temperature: OK 00:19:25.817 Device Reliability: OK 00:19:25.817 Read Only: No 00:19:25.817 Volatile Memory Backup: OK 00:19:25.817 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:25.817 Temperature Threshold: [2024-05-15 03:13:56.799114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.817 [2024-05-15 03:13:56.799119] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2282c30) 00:19:25.817 [2024-05-15 03:13:56.799125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.817 [2024-05-15 03:13:56.799136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eb320, cid 7, qid 0 00:19:25.817 [2024-05-15 03:13:56.799261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.817 [2024-05-15 03:13:56.799266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.817 [2024-05-15 03:13:56.799269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.817 [2024-05-15 03:13:56.799272] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eb320) on tqpair=0x2282c30 00:19:25.817 [2024-05-15 03:13:56.799298] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:25.817 [2024-05-15 03:13:56.799309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.817 [2024-05-15 03:13:56.799315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.817 [2024-05-15 03:13:56.799320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.817 [2024-05-15 03:13:56.799325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.817 [2024-05-15 03:13:56.799332] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.817 [2024-05-15 03:13:56.799335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.817 [2024-05-15 03:13:56.799338] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.817 [2024-05-15 03:13:56.799344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.799355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.799461] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.799472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.799475] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799478] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.799485] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.799496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.799510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.799611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.799616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.799619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.799629] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:25.818 [2024-05-15 03:13:56.799633] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:25.818 [2024-05-15 03:13:56.799641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.799652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.799662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.799733] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.799739] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.799741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799745] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.799754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 
03:13:56.799766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.799775] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.799864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.799870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.799873] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799876] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.799884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.799891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.799896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.799905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.800015] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.800021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.800024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800027] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.800035] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800039] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.800047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.800056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.800165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.800170] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.800175] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.800187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800190] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800193] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.800199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.800208] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.800274] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.800279] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.800282] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.800293] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800297] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800300] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.800305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.800314] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.800417] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.800423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.800426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.800437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.800444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.800449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.800458] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.804475] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:25.818 [2024-05-15 03:13:56.804483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.804486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.804489] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.804499] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.804502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.804505] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282c30) 00:19:25.818 [2024-05-15 03:13:56.804512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.818 [2024-05-15 03:13:56.804522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22eada0, cid 3, qid 0 00:19:25.818 [2024-05-15 03:13:56.804716] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:19:25.818 [2024-05-15 03:13:56.804721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:25.818 [2024-05-15 03:13:56.804724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:25.818 [2024-05-15 03:13:56.804730] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22eada0) on tqpair=0x2282c30 00:19:25.818 [2024-05-15 03:13:56.804737] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:19:25.818 0 Kelvin (-273 Celsius) 00:19:25.818 Available Spare: 0% 00:19:25.818 Available Spare Threshold: 0% 00:19:25.818 Life Percentage Used: 0% 00:19:25.818 Data Units Read: 0 00:19:25.818 Data Units Written: 0 00:19:25.818 Host Read Commands: 0 00:19:25.818 Host Write Commands: 0 00:19:25.818 Controller Busy Time: 0 minutes 00:19:25.818 Power Cycles: 0 00:19:25.818 Power On Hours: 0 hours 00:19:25.818 Unsafe Shutdowns: 0 00:19:25.818 Unrecoverable Media Errors: 0 00:19:25.818 Lifetime Error Log Entries: 0 00:19:25.818 Warning Temperature Time: 0 minutes 00:19:25.818 Critical Temperature Time: 0 minutes 00:19:25.818 00:19:25.818 Number of Queues 00:19:25.818 ================ 00:19:25.818 Number of I/O Submission Queues: 127 00:19:25.818 Number of I/O Completion Queues: 127 00:19:25.818 00:19:25.818 Active Namespaces 00:19:25.818 ================= 00:19:25.818 Namespace ID:1 00:19:25.819 Error Recovery Timeout: Unlimited 00:19:25.819 Command Set Identifier: NVM (00h) 00:19:25.819 Deallocate: Supported 00:19:25.819 Deallocated/Unwritten Error: Not Supported 00:19:25.819 Deallocated Read Value: Unknown 00:19:25.819 Deallocate in Write Zeroes: Not Supported 00:19:25.819 Deallocated Guard Field: 0xFFFF 00:19:25.819 Flush: Supported 00:19:25.819 Reservation: Supported 00:19:25.819 Namespace Sharing Capabilities: Multiple Controllers 00:19:25.819 Size (in LBAs): 131072 (0GiB) 00:19:25.819 Capacity (in LBAs): 131072 (0GiB) 00:19:25.819 Utilization (in LBAs): 131072 (0GiB) 00:19:25.819 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:25.819 EUI64: ABCDEF0123456789 00:19:25.819 UUID: 858dc698-a3d1-410a-871d-2ed80aed04f0 00:19:25.819 Thin Provisioning: Not Supported 00:19:25.819 Per-NS Atomic Units: Yes 00:19:25.819 Atomic Boundary Size (Normal): 0 00:19:25.819 Atomic Boundary Size (PFail): 0 00:19:25.819 Atomic Boundary Offset: 0 00:19:25.819 Maximum Single Source Range Length: 65535 00:19:25.819 Maximum Copy Length: 65535 00:19:25.819 Maximum Source Range Count: 1 00:19:25.819 NGUID/EUI64 Never Reused: No 00:19:25.819 Namespace Write Protected: No 00:19:25.819 Number of LBA Formats: 1 00:19:25.819 Current LBA Format: LBA Format #00 00:19:25.819 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:25.819 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.819 03:13:56 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.819 rmmod nvme_tcp 00:19:25.819 rmmod nvme_fabrics 00:19:25.819 rmmod nvme_keyring 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1094296 ']' 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1094296 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1094296 ']' 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1094296 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1094296 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1094296' 00:19:25.819 killing process with pid 1094296 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1094296 00:19:25.819 [2024-05-15 03:13:56.924928] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:25.819 03:13:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1094296 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.077 03:13:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.615 03:13:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.615 00:19:28.615 real 0m9.322s 00:19:28.615 user 0m7.307s 00:19:28.615 sys 0m4.563s 00:19:28.615 03:13:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:28.615 03:13:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:28.615 ************************************ 00:19:28.615 END TEST nvmf_identify 00:19:28.615 
************************************ 00:19:28.615 03:13:59 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:28.615 03:13:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:28.615 03:13:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:28.615 03:13:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.615 ************************************ 00:19:28.615 START TEST nvmf_perf 00:19:28.615 ************************************ 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:28.615 * Looking for test storage... 00:19:28.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.615 03:13:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:33.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:33.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:33.944 Found net devices under 0000:86:00.0: cvl_0_0 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.944 03:14:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:33.944 Found net devices under 0000:86:00.1: cvl_0_1 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:33.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:19:33.944 00:19:33.944 --- 10.0.0.2 ping statistics --- 00:19:33.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.944 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:19:33.944 00:19:33.944 --- 10.0.0.1 ping statistics --- 00:19:33.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.944 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:19:33.944 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1098055 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1098055 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1098055 ']' 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:33.945 03:14:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:33.945 [2024-05-15 03:14:04.804045] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
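The nvmf_tcp_init trace above splits the two e810 ports between the root namespace and a dedicated network namespace, so target and initiator can talk over real hardware on one host. A condensed sketch of that sequence, using the interface names detected in this run (cvl_0_0/cvl_0_1 are machine-specific):

  # Target-side port moves into its own netns; initiator-side port stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) through the root-namespace firewall,
  # then sanity-check reachability in both directions, as the pings above show.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command thereafter runs under "ip netns exec cvl_0_0_ns_spdk", as the nvmf_tgt launch above shows.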
00:19:33.945 [2024-05-15 03:14:04.804089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.945 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.945 [2024-05-15 03:14:04.860730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.945 [2024-05-15 03:14:04.933953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.945 [2024-05-15 03:14:04.933998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.945 [2024-05-15 03:14:04.934005] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.945 [2024-05-15 03:14:04.934010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.945 [2024-05-15 03:14:04.934015] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.945 [2024-05-15 03:14:04.934060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.945 [2024-05-15 03:14:04.934159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.945 [2024-05-15 03:14:04.934224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.945 [2024-05-15 03:14:04.934225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:34.514 03:14:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:37.803 03:14:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:37.803 03:14:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:37.803 03:14:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:19:37.803 03:14:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.062 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:38.062 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:19:38.062 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:38.062 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:38.062 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.062 [2024-05-15 03:14:09.202478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
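With the TCP transport created, the next records configure the subsystem that the perf passes below will target. Condensed, the RPC sequence is as follows (a sketch of the perf.sh steps traced here, not the full script; bdev creation via gen_nvme.sh and bdev_malloc_create happened just above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Transport, subsystem, one namespace per bdev, then data and discovery listeners.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  $RPC nvmf_subsystem_add_ns $NQN Nvme0n1
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # First fabric-side pass, exactly as invoked at host/perf.sh@56 below:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later passes in the trace vary only the queue depth, IO size, and flags (-HI for latency histograms, --transport-stat for the per-lcore poll counters).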
00:19:38.321 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.321 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:38.321 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.579 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:38.579 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:38.839 03:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.839 [2024-05-15 03:14:09.986696] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:38.839 [2024-05-15 03:14:09.986949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.098 03:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:39.098 03:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:19:39.098 03:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:19:39.098 03:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:39.098 03:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:19:40.475 Initializing NVMe Controllers 00:19:40.475 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:19:40.475 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:19:40.475 Initialization complete. Launching workers. 00:19:40.475 ======================================================== 00:19:40.475 Latency(us) 00:19:40.475 Device Information : IOPS MiB/s Average min max 00:19:40.475 PCIE (0000:5e:00.0) NSID 1 from core 0: 97357.87 380.30 328.27 35.49 7225.09 00:19:40.475 ======================================================== 00:19:40.475 Total : 97357.87 380.30 328.27 35.49 7225.09 00:19:40.475 00:19:40.475 03:14:11 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.475 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.853 Initializing NVMe Controllers 00:19:41.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:41.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:41.853 Initialization complete. Launching workers. 
00:19:41.853 ======================================================== 00:19:41.853 Latency(us) 00:19:41.853 Device Information : IOPS MiB/s Average min max 00:19:41.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 0.28 14277.70 135.29 44662.22 00:19:41.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14709.93 4988.65 47884.69 00:19:41.853 ======================================================== 00:19:41.853 Total : 142.00 0.55 14493.81 135.29 47884.69 00:19:41.853 00:19:41.853 03:14:12 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.853 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.790 Initializing NVMe Controllers 00:19:42.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:42.790 Initialization complete. Launching workers. 00:19:42.790 ======================================================== 00:19:42.790 Latency(us) 00:19:42.790 Device Information : IOPS MiB/s Average min max 00:19:42.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10937.86 42.73 2927.24 341.76 6208.59 00:19:42.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3951.59 15.44 8140.48 7135.71 15463.89 00:19:42.790 ======================================================== 00:19:42.790 Total : 14889.45 58.16 4310.81 341.76 15463.89 00:19:42.790 00:19:43.049 03:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:19:43.049 03:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:19:43.049 03:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:43.049 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.581 Initializing NVMe Controllers 00:19:45.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.581 Controller IO queue size 128, less than required. 00:19:45.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.582 Controller IO queue size 128, less than required. 00:19:45.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:45.582 Initialization complete. Launching workers. 
00:19:45.582 ======================================================== 00:19:45.582 Latency(us) 00:19:45.582 Device Information : IOPS MiB/s Average min max 00:19:45.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1511.43 377.86 86424.20 46969.67 135707.51 00:19:45.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.47 150.87 219490.05 62362.16 346805.34 00:19:45.582 ======================================================== 00:19:45.582 Total : 2114.91 528.73 124393.58 46969.67 346805.34 00:19:45.582 00:19:45.582 03:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:19:45.582 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.839 No valid NVMe controllers or AIO or URING devices found 00:19:45.839 Initializing NVMe Controllers 00:19:45.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.839 Controller IO queue size 128, less than required. 00:19:45.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.839 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:45.839 Controller IO queue size 128, less than required. 00:19:45.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.839 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:19:45.839 WARNING: Some requested NVMe devices were skipped 00:19:45.839 03:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:45.839 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.373 Initializing NVMe Controllers 00:19:48.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.373 Controller IO queue size 128, less than required. 00:19:48.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:48.373 Controller IO queue size 128, less than required. 00:19:48.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:48.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:48.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:48.373 Initialization complete. Launching workers. 
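Two notes before the statistics block that follows. The -o 36964 run above dropped both namespaces because the IO size is not sector-aligned, and the --transport-stat run just launched prints per-namespace TCP poll counters whose meanings (my reading of the TCP transport, not spelled out in the log) are sketched here:

# Why the -o 36964 run warned and skipped both namespaces:
echo $(( 36964 % 512 ))   # -> 100, not a multiple of the 512-byte sector size
# In the stats block below:
#   polls            - poll-group iterations
#   idle_polls       - iterations that found no pending work
#   sock_completions - socket events handled
#   nvme_completions - command completions observed
# e.g. NSID 1 below: 8739 idle out of 18585 polls, so roughly 47% of polls were empty.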
00:19:48.373 00:19:48.373 ==================== 00:19:48.373 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:48.373 TCP transport: 00:19:48.373 polls: 18585 00:19:48.373 idle_polls: 8739 00:19:48.373 sock_completions: 9846 00:19:48.373 nvme_completions: 6021 00:19:48.373 submitted_requests: 9028 00:19:48.373 queued_requests: 1 00:19:48.373 00:19:48.373 ==================== 00:19:48.373 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:48.373 TCP transport: 00:19:48.373 polls: 17974 00:19:48.373 idle_polls: 8312 00:19:48.373 sock_completions: 9662 00:19:48.373 nvme_completions: 6161 00:19:48.373 submitted_requests: 9344 00:19:48.373 queued_requests: 1 00:19:48.373 ======================================================== 00:19:48.373 Latency(us) 00:19:48.373 Device Information : IOPS MiB/s Average min max 00:19:48.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1502.59 375.65 87201.65 49890.91 143863.49 00:19:48.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1537.53 384.38 84316.58 32925.40 121293.82 00:19:48.373 ======================================================== 00:19:48.373 Total : 3040.12 760.03 85742.54 32925.40 143863.49 00:19:48.373 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.373 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.373 rmmod nvme_tcp 00:19:48.373 rmmod nvme_fabrics 00:19:48.373 rmmod nvme_keyring 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1098055 ']' 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1098055 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1098055 ']' 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1098055 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1098055 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1098055' 00:19:48.632 killing process with pid 1098055 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1098055 00:19:48.632 [2024-05-15 03:14:19.579103] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:48.632 03:14:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1098055 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.010 03:14:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.544 03:14:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.544 00:19:52.544 real 0m23.867s 00:19:52.544 user 1m4.435s 00:19:52.544 sys 0m7.238s 00:19:52.544 03:14:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:52.544 03:14:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:52.544 ************************************ 00:19:52.544 END TEST nvmf_perf 00:19:52.544 ************************************ 00:19:52.544 03:14:23 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:52.544 03:14:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:52.544 03:14:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:52.544 03:14:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.544 ************************************ 00:19:52.544 START TEST nvmf_fio_host 00:19:52.544 ************************************ 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:52.544 * Looking for test storage... 
00:19:52.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.544 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.545 03:14:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
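The gather_supported_nvmf_pci_devs walk starting here classifies NICs by PCI ID before picking the test interfaces; condensed from the array setup being traced (IDs exactly as traced):

# NIC classes probed below:
#   e810 : 8086:0x1592, 8086:0x159b   <- the e810 class this job tests against
#   x722 : 8086:0x37d2
#   mlx  : 15b3:0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013
# Both ports of a 0x159b device turn up below as 0000:86:00.0/.1 -> cvl_0_0/cvl_0_1.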
00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.830 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.830 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.831 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:19:57.831 00:19:57.831 --- 10.0.0.2 ping statistics --- 00:19:57.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.831 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
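The nvmf_tcp_init plumbing traced above moves one NIC port into a private network namespace so target and initiator get separate stacks on a single host; condensed, with devices and addresses exactly as traced (the reply to the second ping follows below):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on 4420
ping -c 1 10.0.0.2                                            # root ns -> target (0.199 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> root; reply below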
00:19:57.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:19:57.831 00:19:57.831 --- 10.0.0.1 ping statistics --- 00:19:57.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.831 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=1104156 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 1104156 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1104156 ']' 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:57.831 03:14:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.831 [2024-05-15 03:14:28.628842] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:19:57.831 [2024-05-15 03:14:28.628885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.831 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.831 [2024-05-15 03:14:28.685756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.831 [2024-05-15 03:14:28.770710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:57.831 [2024-05-15 03:14:28.770743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.831 [2024-05-15 03:14:28.770750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.831 [2024-05-15 03:14:28.770757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.831 [2024-05-15 03:14:28.770762] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.831 [2024-05-15 03:14:28.770804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.831 [2024-05-15 03:14:28.770901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.831 [2024-05-15 03:14:28.770962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.831 [2024-05-15 03:14:28.770963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 [2024-05-15 03:14:29.456334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 Malloc1 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:19:58.401 [2024-05-15 03:14:29.540086] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:58.401 [2024-05-15 03:14:29.540326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:19:58.401 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:58.707 
03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:58.707 03:14:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.707 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:58.707 fio-3.35 00:19:58.707 Starting 1 thread 00:19:58.964 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.483 00:20:01.483 test: (groupid=0, jobs=1): err= 0: pid=1104519: Wed May 15 03:14:32 2024 00:20:01.483 read: IOPS=11.6k, BW=45.4MiB/s (47.7MB/s)(91.1MiB/2005msec) 00:20:01.483 slat (nsec): min=1569, max=247538, avg=1737.83, stdev=2232.00 00:20:01.483 clat (usec): min=3211, max=10912, avg=6102.22, stdev=447.00 00:20:01.483 lat (usec): min=3239, max=10914, avg=6103.95, stdev=446.96 00:20:01.483 clat percentiles (usec): 00:20:01.483 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:20:01.483 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:20:01.483 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:20:01.483 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[ 9765], 00:20:01.483 | 99.99th=[10290] 00:20:01.483 bw ( KiB/s): min=45648, max=46960, per=99.94%, avg=46512.00, stdev=590.63, samples=4 00:20:01.483 iops : min=11412, max=11740, avg=11628.00, stdev=147.66, samples=4 00:20:01.483 write: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(90.5MiB/2005msec); 0 zone resets 00:20:01.483 slat (nsec): min=1629, max=222532, avg=1818.85, stdev=1639.59 00:20:01.483 clat (usec): min=2451, max=8772, avg=4891.99, stdev=364.00 00:20:01.483 lat (usec): min=2466, max=8774, avg=4893.81, stdev=364.01 00:20:01.483 clat percentiles (usec): 00:20:01.483 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:20:01.483 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:20:01.483 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:20:01.483 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6849], 99.95th=[ 7701], 00:20:01.483 | 99.99th=[ 8717] 00:20:01.483 bw ( KiB/s): min=45880, max=46664, per=100.00%, avg=46212.00, stdev=341.82, samples=4 00:20:01.483 iops : min=11470, max=11666, avg=11553.00, stdev=85.46, samples=4 00:20:01.483 lat (msec) : 4=0.49%, 10=99.49%, 20=0.02% 00:20:01.483 cpu : usr=73.05%, sys=24.65%, ctx=60, majf=0, minf=4 00:20:01.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:01.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:01.483 issued rwts: total=23328,23161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:01.483 00:20:01.483 Run status group 0 (all jobs): 00:20:01.483 READ: bw=45.4MiB/s (47.7MB/s), 45.4MiB/s-45.4MiB/s (47.7MB/s-47.7MB/s), io=91.1MiB (95.6MB), run=2005-2005msec 00:20:01.483 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.5MiB (94.9MB), run=2005-2005msec 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:01.483 03:14:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:01.483 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:01.483 fio-3.35 00:20:01.483 Starting 1 thread 00:20:01.483 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.008 00:20:04.008 test: (groupid=0, jobs=1): err= 0: pid=1105096: Wed May 15 03:14:34 2024 00:20:04.008 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2008msec) 00:20:04.008 slat (nsec): min=2633, max=82342, avg=2916.79, stdev=1195.90 00:20:04.008 clat (usec): min=1763, max=13773, avg=7001.82, stdev=1711.76 00:20:04.008 lat (usec): min=1766, max=13776, avg=7004.73, 
stdev=1711.86 00:20:04.008 clat percentiles (usec): 00:20:04.008 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5473], 00:20:04.008 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7504], 00:20:04.008 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9765], 00:20:04.008 | 99.00th=[11338], 99.50th=[11994], 99.90th=[13566], 99.95th=[13698], 00:20:04.008 | 99.99th=[13698] 00:20:04.008 bw ( KiB/s): min=82336, max=95360, per=49.95%, avg=86288.00, stdev=6153.80, samples=4 00:20:04.008 iops : min= 5146, max= 5960, avg=5393.00, stdev=384.61, samples=4 00:20:04.008 write: IOPS=6204, BW=96.9MiB/s (102MB/s)(176MiB/1820msec); 0 zone resets 00:20:04.008 slat (usec): min=30, max=375, avg=32.01, stdev= 6.72 00:20:04.008 clat (usec): min=3126, max=14719, avg=8621.87, stdev=1538.63 00:20:04.008 lat (usec): min=3156, max=14751, avg=8653.89, stdev=1539.66 00:20:04.008 clat percentiles (usec): 00:20:04.008 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:20:04.008 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:20:04.008 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469], 00:20:04.008 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13698], 99.95th=[14353], 00:20:04.008 | 99.99th=[14615] 00:20:04.008 bw ( KiB/s): min=85856, max=99200, per=90.73%, avg=90072.00, stdev=6179.09, samples=4 00:20:04.008 iops : min= 5366, max= 6200, avg=5629.50, stdev=386.19, samples=4 00:20:04.008 lat (msec) : 2=0.02%, 4=1.85%, 10=88.95%, 20=9.18% 00:20:04.008 cpu : usr=86.01%, sys=12.75%, ctx=25, majf=0, minf=1 00:20:04.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:04.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:04.008 issued rwts: total=21678,11292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:04.008 00:20:04.008 Run status group 0 (all jobs): 00:20:04.008 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (355MB), run=2008-2008msec 00:20:04.008 WRITE: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=176MiB (185MB), run=1820-1820msec 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.008 rmmod nvme_tcp 00:20:04.008 rmmod nvme_fabrics 00:20:04.008 rmmod nvme_keyring 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1104156 ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1104156 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1104156 ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1104156 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1104156 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1104156' 00:20:04.008 killing process with pid 1104156 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1104156 00:20:04.008 [2024-05-15 03:14:34.939284] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:04.008 03:14:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1104156 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.266 03:14:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.165 03:14:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.165 00:20:06.165 real 0m14.003s 00:20:06.165 user 0m41.059s 00:20:06.165 sys 0m5.618s 00:20:06.165 03:14:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:06.165 03:14:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 ************************************ 00:20:06.165 END TEST nvmf_fio_host 00:20:06.165 ************************************ 00:20:06.165 03:14:37 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:06.165 03:14:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:06.165 03:14:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:20:06.165 03:14:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 ************************************ 00:20:06.165 START TEST nvmf_failover 00:20:06.165 ************************************ 00:20:06.165 03:14:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:06.423 * Looking for test storage... 00:20:06.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.423 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.424 03:14:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.685 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.685 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.685 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.685 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.685 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:11.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:20:11.685 00:20:11.685 --- 10.0.0.2 ping statistics --- 00:20:11.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.686 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:20:11.686 00:20:11.686 --- 10.0.0.1 ping statistics --- 00:20:11.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.686 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1108835 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1108835 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1108835 ']' 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:11.686 03:14:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:11.686 [2024-05-15 03:14:42.591666] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
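Condensed for reference, the nvmf_tcp_init block above wires the two E810 ports into a back-to-back target/initiator topology, with the target side isolated in its own network namespace. A minimal sketch of the same commands (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this run picked):

  ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

From here on the target binary runs wrapped in 'ip netns exec cvl_0_0_ns_spdk', so every listener it creates lives on 10.0.0.2.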
00:20:11.686 [2024-05-15 03:14:42.591708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.686 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.686 [2024-05-15 03:14:42.649383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:11.686 [2024-05-15 03:14:42.727835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.686 [2024-05-15 03:14:42.727873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.686 [2024-05-15 03:14:42.727880] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.686 [2024-05-15 03:14:42.727886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.686 [2024-05-15 03:14:42.727890] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.686 [2024-05-15 03:14:42.727997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.686 [2024-05-15 03:14:42.728018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.686 [2024-05-15 03:14:42.728020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.249 03:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:12.249 03:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:20:12.249 03:14:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.249 03:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.249 03:14:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:12.512 03:14:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.512 03:14:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:12.512 [2024-05-15 03:14:43.593611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.512 03:14:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:12.775 Malloc0 00:20:12.775 03:14:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.031 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:13.031 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.288 [2024-05-15 03:14:44.333903] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:13.288 [2024-05-15 03:14:44.334143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.288 03:14:44 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:13.545 [2024-05-15 03:14:44.510588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:13.545 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:13.545 [2024-05-15 03:14:44.683108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1109099 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1109099 /var/tmp/bdevperf.sock 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1109099 ']' 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
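Condensed, the provisioning that host/failover.sh@20 through @31 just performed reduces to the sketch below. The for-loop is an editorial contraction of the three separate add_listener calls, and the long script path is shortened into a variable; everything else is taken directly from the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport; -u 8192 sets the I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                     # three listeners = three failover paths
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # Host side: bdevperf waits on its own RPC socket, then runs 4 KiB verify I/O at queue depth
  # 128 for 15 seconds; -f keeps it running when a path drops instead of aborting the test.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &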
00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.802 03:14:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:14.732 03:14:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.732 03:14:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:20:14.732 03:14:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:14.989 NVMe0n1 00:20:14.989 03:14:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:15.245 00:20:15.245 03:14:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1109438 00:20:15.245 03:14:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.245 03:14:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:16.616 03:14:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.616 [2024-05-15 03:14:47.509964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set 00:20:16.616 [2024-05-15 03:14:47.510104] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8f00 is same with the state(5) to be set
00:20:16.616 [... tcp.c:1598 'The recv state of tqpair=0x21e8f00 is same with the state(5) to be set' repeated ~35 more times, 03:14:47.510109 through 03:14:47.510317; duplicates elided ...]
00:20:16.617 03:14:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:20:19.892 03:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:19.892 00:20:19.892 03:14:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:20.150 [2024-05-15 03:14:51.153708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e9d90 is same with the state(5) to be set
00:20:20.150 [... same message for tqpair=0x21e9d90 repeated ~65 more times, 03:14:51.153747 through 03:14:51.154114; duplicates elided ...]
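Stripped of the xtrace noise, the whole failover exercise is a fixed cycle of rpc.py calls against the target (listener side) and against the bdevperf RPC socket (host side). A condensed sketch, with the long paths shortened into shell variables and the comments added as interpretation:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the listener bdevperf is actively using; I/O must fail over to port 4421.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Attach a third path to the same NVMe0 bdev while I/O is in flight,
  # then drop 4421 so the I/O has to move to 4422.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # Finally (below), re-add 4420 and drop 4422, failing the I/O back to the original path.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422

Each nvmf_subsystem_remove_listener tears down the live TCP qpairs on that port, which is what produces the bursts of tcp.c:1598 state-transition errors around each step.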
00:20:20.150 03:14:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:20:23.421 03:14:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:23.421 [2024-05-15 03:14:54.367108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:23.421 03:14:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:20:24.379 03:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:24.636 [2024-05-15 03:14:55.561515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eac60 is same with the state(5) to be set
00:20:24.636 [... same message for tqpair=0x21eac60 repeated ~19 more times, 03:14:55.561561 through 03:14:55.561676; duplicates elided ...]
00:20:24.636 03:14:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1109438
00:20:31.192 0
00:20:31.192 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1109099
00:20:31.192 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1109099 ']'
00:20:31.192 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1109099
00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1109099
00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1109099' 00:20:31.193 killing process with pid 1109099 00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1109099 00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1109099 00:20:31.193 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:31.193 [2024-05-15 03:14:44.744104] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:31.193 [2024-05-15 03:14:44.744157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109099 ] 00:20:31.193 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.193 [2024-05-15 03:14:44.798954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.193 [2024-05-15 03:14:44.872638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.193 Running I/O for 15 seconds... 00:20:31.193 [2024-05-15 03:14:47.511046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.193 [2024-05-15 03:14:47.511168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.193 [2024-05-15 03:14:47.511176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:31.193 [2024-05-15 03:14:47.511183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.193 [... identical READ / 'ABORTED - SQ DELETION (00/08)' pairs repeated for each outstanding command, lba:91696 through lba:92008 in steps of 8, with the run continuing past the end of this excerpt; duplicates elided ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.194 [2024-05-15 03:14:47.511866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.194 [2024-05-15 03:14:47.511975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.194 [2024-05-15 03:14:47.511981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.511989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.511997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 
03:14:47.512221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.195 [2024-05-15 03:14:47.512509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.195 [2024-05-15 03:14:47.512517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.196 [2024-05-15 03:14:47.512523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.196 [2024-05-15 03:14:47.512538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.196 [2024-05-15 03:14:47.512554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.196 [2024-05-15 03:14:47.512569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92464 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92472 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92480 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92488 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512689] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92496 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92504 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92512 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92520 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92528 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92536 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92544 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92552 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92560 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92568 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92576 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.512957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92584 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.512976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 
03:14:47.512981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.512987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92592 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.512993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.513000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.196 [2024-05-15 03:14:47.513005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.196 [2024-05-15 03:14:47.513010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92600 len:8 PRP1 0x0 PRP2 0x0 00:20:31.196 [2024-05-15 03:14:47.513016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.196 [2024-05-15 03:14:47.513023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.513027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.513033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92608 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.513045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.513050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92616 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92624 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92632 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523897] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92640 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92648 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92656 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.523975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.523981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92064 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.523988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.523996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.197 [2024-05-15 03:14:47.524001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.197 [2024-05-15 03:14:47.524007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92072 len:8 PRP1 0x0 PRP2 0x0 00:20:31.197 [2024-05-15 03:14:47.524015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.197 [2024-05-15 03:14:47.524057] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1596230 was disconnected and freed. reset controller. 
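Every entry above carries the same completion status, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)": the parenthesized pair is the NVMe status code type and status code, and SCT 0x0 (Generic Command Status) with SC 0x08 is the spec's "Command Aborted due to SQ Deletion" — expected when a submission queue is torn down with I/O still in flight. A minimal sketch of decoding that pair (the helper name is made up for illustration):

```bash
#!/usr/bin/env bash
# Hedged sketch, not part of this test run: decode the "(SCT/SC)" pair that
# spdk_nvme_print_completion appends, e.g. "ABORTED - SQ DELETION (00/08)".
decode_nvme_status() {
    local sct=$((16#$1)) sc=$((16#$2)) type
    case "$sct" in
        0) type="GENERIC COMMAND STATUS" ;;
        1) type="COMMAND SPECIFIC STATUS" ;;
        2) type="MEDIA AND DATA INTEGRITY ERRORS" ;;
        3) type="PATH RELATED STATUS" ;;
        *) type="VENDOR SPECIFIC / RESERVED" ;;
    esac
    # SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe
    # base spec, which is what every aborted I/O above reports.
    printf 'SCT 0x%x (%s), SC 0x%02x\n' "$sct" "$type" "$sc"
}
decode_nvme_status 00 08   # -> SCT 0x0 (GENERIC COMMAND STATUS), SC 0x08
```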
00:20:31.197 [2024-05-15 03:14:47.524072] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:31.197 [2024-05-15 03:14:47.524095 - 03:14:47.524148] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.197 [2024-05-15 03:14:47.524159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:31.197 [2024-05-15 03:14:47.524182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577400 (9): Bad file descriptor
00:20:31.197 [2024-05-15 03:14:47.527437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:31.197 [2024-05-15 03:14:47.676149] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
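bdev_nvme_failover_trid can only switch to 10.0.0.2:4421 because an alternate trid for nqn.2016-06.io.spdk:cnode1 was already known to the driver. A hedged sketch of how such a pair of paths is typically registered via rpc.py before a run like this (the bdev name Nvme0 is an assumption, and the multipath option and its accepted values vary across SPDK versions):

```bash
#!/usr/bin/env bash
# Hedged sketch, not taken from this run: register two TCP paths to the same
# subsystem so that a qpair loss on the first can fail over to the second.
# Addresses and the NQN are the ones printed in the log; "Nvme0" is assumed.

# primary path; creates the Nvme0n1 bdev used by the test I/O
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover

# alternate path; on disconnect, bdev_nvme_failover_trid retargets I/O here
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
```

With both paths registered, the sequence above — qpair freed, trid switched, controller reset — completes without failing the I/O back to the application; here the reset succeeds roughly 150 ms after the disconnect (03:14:47.527437 to 03:14:47.676149).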
00:20:31.197 [2024-05-15 03:14:51.154303 - 03:14:51.155145] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:51248..51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command entries condensed)
00:20:31.199 [2024-05-15 03:14:51.155153 - 03:14:51.155225] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 nsid:1 lba:51704..51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command entries condensed) [2024-05-15 03:14:51.155231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.199 [2024-05-15 03:14:51.155362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.199 [2024-05-15 03:14:51.155370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 
[2024-05-15 03:14:51.155675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.200 [2024-05-15 03:14:51.155845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.155883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52088 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.155890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.155904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.155909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52096 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.155916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.155927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.155933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52104 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.155939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.155950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.155957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52112 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.155964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.155975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.155980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52120 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.155987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.155993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:20:31.200 [2024-05-15 03:14:51.155998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.156004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52128 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.156012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.156018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.156023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.156028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52136 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.156035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.156041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.156046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.156052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52144 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.156058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.156064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.156069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.200 [2024-05-15 03:14:51.156074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52152 len:8 PRP1 0x0 PRP2 0x0 00:20:31.200 [2024-05-15 03:14:51.156080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.200 [2024-05-15 03:14:51.156087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.200 [2024-05-15 03:14:51.156092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52160 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52168 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156138] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52176 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52184 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52192 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52200 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52208 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52216 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.156284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52224 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.156290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.156297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.156302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.168735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52232 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.168749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.168759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.168766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.168774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52240 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.168783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.168791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.168799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.168806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52248 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.168814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.168825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.168832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.168839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52256 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.168848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.168856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.168863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 [2024-05-15 03:14:51.168871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52264 len:8 PRP1 0x0 PRP2 0x0 00:20:31.201 [2024-05-15 03:14:51.168879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.201 [2024-05-15 03:14:51.168888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.201 [2024-05-15 03:14:51.168894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.201 
00:20:31.201 [2024-05-15 03:14:51.168902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51688 len:8 PRP1 0x0 PRP2 0x0
00:20:31.201 [2024-05-15 03:14:51.168910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.168919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:31.201 [2024-05-15 03:14:51.168926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:31.201 [2024-05-15 03:14:51.168933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51696 len:8 PRP1 0x0 PRP2 0x0
00:20:31.201 [2024-05-15 03:14:51.168942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.168987] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1740e00 was disconnected and freed. reset controller.
00:20:31.201 [2024-05-15 03:14:51.168998] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:31.201 [2024-05-15 03:14:51.169022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:31.201 [2024-05-15 03:14:51.169032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.169041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:31.201 [2024-05-15 03:14:51.169050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.169059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:31.201 [2024-05-15 03:14:51.169068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.169077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:31.201 [2024-05-15 03:14:51.169086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:31.201 [2024-05-15 03:14:51.169094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:31.201 [2024-05-15 03:14:51.169129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577400 (9): Bad file descriptor
00:20:31.202 [2024-05-15 03:14:51.173045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:31.202 [2024-05-15 03:14:51.201226] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:31.202 [2024-05-15 03:14:55.563338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:31.202 [2024-05-15 03:14:55.563375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated command/completion pairs elided: READ sqid:1 nsid:1 lba:46760-46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1]
[repeated command/completion pairs elided: WRITE sqid:1 nsid:1 lba:46920-47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1]
00:20:31.204 [2024-05-15 03:14:55.564296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[repeated records elided: queued WRITE sqid:1 cid:0 nsid:1 lba:47304-47384 len:8 PRP1 0x0 PRP2 0x0, each completed manually with ABORTED - SQ DELETION (00/08) qid:1 after "aborting queued i/o"]
00:20:31.204 [2024-05-15 03:14:55.564555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:31.204 [2024-05-15 03:14:55.564560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:31.204 [2024-05-15 03:14:55.564565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.204 [2024-05-15 03:14:55.564578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.204 [2024-05-15 03:14:55.564583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.204 [2024-05-15 03:14:55.564588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.204 [2024-05-15 03:14:55.564601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.204 [2024-05-15 03:14:55.564607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.204 [2024-05-15 03:14:55.564612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47408 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.204 [2024-05-15 03:14:55.564625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.204 [2024-05-15 03:14:55.564629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.204 [2024-05-15 03:14:55.564634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.204 [2024-05-15 03:14:55.564647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.204 [2024-05-15 03:14:55.564652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.204 [2024-05-15 03:14:55.564657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.204 [2024-05-15 03:14:55.564669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.204 [2024-05-15 03:14:55.564676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.204 [2024-05-15 03:14:55.564681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:20:31.204 [2024-05-15 03:14:55.564687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 
00:20:31.205 [2024-05-15 03:14:55.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47464 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47472 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47480 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47488 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47496 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47504 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47512 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47520 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47528 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47536 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.564982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.564989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.564994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.564999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47544 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47552 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47560 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47568 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47576 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47584 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:31.205 [2024-05-15 03:14:55.565135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47592 len:8 PRP1 0x0 PRP2 0x0 00:20:31.205 [2024-05-15 03:14:55.565151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.205 [2024-05-15 03:14:55.565157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.205 [2024-05-15 03:14:55.565162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.205 [2024-05-15 03:14:55.565167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47600 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47608 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47616 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47624 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47632 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565274] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47640 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47648 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47656 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47664 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.565369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.565374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47672 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.565380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.565388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.575916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.575928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47680 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.575939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.575948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.575956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.575963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47688 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.575971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.575981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.575987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.575994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47696 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47704 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47712 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47720 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47728 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 
03:14:55.576140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47736 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47744 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.206 [2024-05-15 03:14:55.576213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47752 len:8 PRP1 0x0 PRP2 0x0 00:20:31.206 [2024-05-15 03:14:55.576221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.206 [2024-05-15 03:14:55.576230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.206 [2024-05-15 03:14:55.576237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47760 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47768 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46864 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576330] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46872 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46880 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46888 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46896 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46904 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.207 [2024-05-15 03:14:55.576491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.207 [2024-05-15 03:14:55.576498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46912 len:8 PRP1 0x0 PRP2 0x0 00:20:31.207 [2024-05-15 03:14:55.576506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576553] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x159ab00 was disconnected and freed. reset controller. 
00:20:31.207 [2024-05-15 03:14:55.576565] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:31.207 [2024-05-15 03:14:55.576589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.207 [2024-05-15 03:14:55.576600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.207 [2024-05-15 03:14:55.576618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.207 [2024-05-15 03:14:55.576636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.207 [2024-05-15 03:14:55.576655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.207 [2024-05-15 03:14:55.576663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:31.207 [2024-05-15 03:14:55.576689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577400 (9): Bad file descriptor 00:20:31.207 [2024-05-15 03:14:55.580590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:31.207 [2024-05-15 03:14:55.621570] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
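The pass criterion applied just below is deliberately simple: the test forces three path drops, so the captured bdevperf output must contain exactly three successful controller resets. A minimal sketch of that check, assuming the run's output was captured to try.txt the way host/failover.sh does (the grep string, the expected count of 3, and the try.txt path all appear in this trace):

# count the reset notices emitted by bdev_nvme; three forced drops -> three resets
count=$(grep -c 'Resetting controller successful' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
(( count != 3 )) && exit 1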
00:20:31.207
00:20:31.207 Latency(us)
00:20:31.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.207 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:31.207 Verification LBA range: start 0x0 length 0x4000
00:20:31.207 NVMe0n1 : 15.00 10634.03 41.54 628.98 0.00 11341.27 452.34 23251.03
00:20:31.207 ===================================================================================================================
00:20:31.207 Total : 10634.03 41.54 628.98 0.00 11341.27 452.34 23251.03
00:20:31.207 Received shutdown signal, test time was about 15.000000 seconds
00:20:31.207
00:20:31.207 Latency(us)
00:20:31.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.207 ===================================================================================================================
00:20:31.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1112021
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1112021 /var/tmp/bdevperf.sock
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1112021 ']'
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
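Here failover.sh restarts bdevperf with -z, which in this test's usage brings the application up idle and waits for configuration over the RPC socket named by -r instead of reading a bdev config up front; waitforlisten then polls that socket before any rpc.py call is issued. A condensed sketch of the same flow, assuming SPDK_DIR and SOCK as shorthand for the workspace paths shown in the trace (the variable names are illustrative, the commands themselves are the ones the log runs):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
# -z: start idle and wait for RPC; -q 128 / -o 4096 / -w verify / -t 1:
# queue depth 128, 4 KiB I/O size, verify workload, 1 second runtime
"$SPDK_DIR"/build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# once the socket is listening, attach the target subsystem as bdev NVMe0
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the queued job; this is what later prints the Latency(us) table
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests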
00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:31.207 03:15:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:31.468 03:15:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:31.468 03:15:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:20:31.468 03:15:02 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:31.728 [2024-05-15 03:15:02.763415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:31.728 03:15:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:31.987 [2024-05-15 03:15:02.943893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:31.987 03:15:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:32.246 NVMe0n1 00:20:32.504 03:15:03 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:32.762 00:20:32.762 03:15:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:33.330 00:20:33.330 03:15:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:33.330 03:15:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:33.330 03:15:04 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:33.589 03:15:04 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:36.877 03:15:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.877 03:15:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:36.877 03:15:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.877 03:15:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1113524 00:20:36.877 03:15:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1113524 00:20:37.812 0 00:20:37.813 03:15:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:37.813 [2024-05-15 03:15:01.800857] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
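The block above (failover.sh@76 through @87) is the multipath setup and the first forced failover: two extra listeners are opened on the target, the initiator attaches the same subsystem once per portal under the same bdev name so bdev_nvme records the extra portals as alternate (failover) trids rather than new controllers, and the 4420 path is then detached so the next reset moves to 4421 (the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice in the try.txt dump below). A sketch of those RPCs, reusing the SPDK_DIR/SOCK shorthand from the previous sketch:

# target side: advertise the subsystem on two more TCP portals
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4422
# initiator side: one attach per portal, all under the same bdev name NVMe0
for port in 4420 4421 4422; do
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# drop the active path and give the reset a moment to fail over
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3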
00:20:37.813 [2024-05-15 03:15:01.800912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112021 ] 00:20:37.813 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.813 [2024-05-15 03:15:01.856029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.813 [2024-05-15 03:15:01.928262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.813 [2024-05-15 03:15:04.584442] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:37.813 [2024-05-15 03:15:04.584496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.813 [2024-05-15 03:15:04.584507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.813 [2024-05-15 03:15:04.584516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.813 [2024-05-15 03:15:04.584523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.813 [2024-05-15 03:15:04.584530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.813 [2024-05-15 03:15:04.584537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.813 [2024-05-15 03:15:04.584545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.813 [2024-05-15 03:15:04.584551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.813 [2024-05-15 03:15:04.584558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.813 [2024-05-15 03:15:04.584579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.813 [2024-05-15 03:15:04.584593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fc400 (9): Bad file descriptor 00:20:37.813 [2024-05-15 03:15:04.632737] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:37.813 Running I/O for 1 seconds... 
00:20:37.813
00:20:37.813 Latency(us)
00:20:37.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:37.813 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:37.813 Verification LBA range: start 0x0 length 0x4000
00:20:37.813 NVMe0n1 : 1.05 10402.76 40.64 0.00 0.00 11801.78 2308.01 41715.09
00:20:37.813 ===================================================================================================================
00:20:37.813 Total : 10402.76 40.64 0.00 0.00 11801.78 2308.01 41715.09
00:20:37.813 03:15:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:37.813 03:15:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:38.072 03:15:09 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:38.331 03:15:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:38.331 03:15:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:38.590 03:15:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:38.590 03:15:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1112021
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1112021 ']'
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1112021
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1112021
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1112021'
killing process with pid 1112021
00:20:41.879 03:15:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1112021
00:20:42.137 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1112021
00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:42.397
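After each detach the script confirms that the NVMe0 controller object survived the path removal before tearing everything down. A short sketch of that verify-then-cleanup tail, under the same SPDK_DIR/SOCK/bdevperf_pid assumptions as the sketches above:

# the controller must still be registered after losing a path
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
# stop bdevperf and remove the subsystem from the target
kill "$bdevperf_pid" && wait "$bdevperf_pid"
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1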
03:15:13 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.397 rmmod nvme_tcp 00:20:42.397 rmmod nvme_fabrics 00:20:42.397 rmmod nvme_keyring 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1108835 ']' 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1108835 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1108835 ']' 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1108835 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1108835 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1108835' 00:20:42.397 killing process with pid 1108835 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1108835 00:20:42.397 [2024-05-15 03:15:13.429354] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:42.397 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1108835 00:20:42.655 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.655 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.656 03:15:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.578 03:15:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.578 00:20:44.578 real 0m38.398s 00:20:44.578 user 
2m5.261s 00:20:44.578 sys 0m7.110s 00:20:44.578 03:15:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:44.578 03:15:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:44.578 ************************************ 00:20:44.578 END TEST nvmf_failover 00:20:44.578 ************************************ 00:20:44.837 03:15:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:44.837 03:15:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:44.837 03:15:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:44.837 03:15:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:44.837 ************************************ 00:20:44.837 START TEST nvmf_host_discovery 00:20:44.837 ************************************ 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:44.837 * Looking for test storage... 00:20:44.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.837 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.838 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.838 03:15:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.838 03:15:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
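
The gather_supported_nvmf_pci_devs pass that begins here builds ID tables for the supported NIC families (e810, x722, mlx) and then resolves each matching PCI function to the kernel net devices it exposes under /sys/bus/pci. A minimal standalone sketch of that lookup follows, assuming only the two E810 device IDs this run actually probes (0x1592, 0x159b) and none of the harness's x722/mlx or RDMA handling; it is illustrative, not the real nvmf/common.sh:

    #!/usr/bin/env bash
    # Hedged sketch: map known Intel E810 PCI device IDs to the net
    # interfaces they expose, as the surrounding trace does.
    intel=0x8086
    e810_ids=(0x1592 0x159b)      # E810 backplane / SFP IDs seen in this log
    pci_devs=() net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] && pci_devs+=("$pci")
        done
    done
    for pci in "${pci_devs[@]}"; do
        # each matched function exposes its interface under <pci>/net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")
        done
    done
    echo "Found net devices: ${net_devs[*]}"

On this rig such a loop would report cvl_0_0 and cvl_0_1, the two ports of the E810 adapter that the trace finds at 0000:86:00.0 and 0000:86:00.1.
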
00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.121 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.122 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.122 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.122 03:15:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:50.122 03:15:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:50.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:50.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:20:50.122
00:20:50.122 --- 10.0.0.2 ping statistics ---
00:20:50.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:50.122 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:50.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:50.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:20:50.122 00:20:50.122 --- 10.0.0.1 ping statistics --- 00:20:50.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.122 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1117746 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1117746 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1117746 ']' 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.122 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.122 [2024-05-15 03:15:21.103867] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:50.122 [2024-05-15 03:15:21.103909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.122 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.122 [2024-05-15 03:15:21.159425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.122 [2024-05-15 03:15:21.236876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
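
The nvmf/common.sh@243-@268 sequence just above moves one E810 port into a dedicated network namespace for the target, keeps the other port in the root namespace for the initiator, opens TCP port 4420, and proves two-way reachability before nvmf_tgt is launched inside the namespace. The same plumbing as a standalone replay script (interface names, addresses, and flags copied from this log; the nvmf_tgt path is shortened; run as root, error handling omitted):

    #!/usr/bin/env bash
    set -e
    # cvl_0_0 becomes the target-side port inside the namespace;
    # cvl_0_1 stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    # connectivity verified; the target then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Splitting target and initiator across namespaces this way lets a single two-port host push real NVMe/TCP traffic over the physical ports rather than loopback, which is why the harness pings in both directions before starting the target.
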
00:20:50.122 [2024-05-15 03:15:21.236911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.122 [2024-05-15 03:15:21.236922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.122 [2024-05-15 03:15:21.236928] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.122 [2024-05-15 03:15:21.236932] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.122 [2024-05-15 03:15:21.236953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 [2024-05-15 03:15:21.943488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 [2024-05-15 03:15:21.951463] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:51.074 [2024-05-15 03:15:21.951666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 null0 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 null1 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1117992 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1117992 /tmp/host.sock 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1117992 ']' 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:51.074 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.074 03:15:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 [2024-05-15 03:15:22.024547] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:20:51.074 [2024-05-15 03:15:22.024586] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117992 ] 00:20:51.074 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.074 [2024-05-15 03:15:22.077230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.074 [2024-05-15 03:15:22.149454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.010 03:15:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.010 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:52.011 03:15:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.011 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 [2024-05-15 03:15:23.174842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:52.269 
03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:20:52.269 03:15:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:20:52.836 [2024-05-15 03:15:23.893067] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:52.836 [2024-05-15 03:15:23.893090] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:52.836 [2024-05-15 03:15:23.893103] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:53.094 [2024-05-15 03:15:24.022503] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:53.094 [2024-05-15 03:15:24.205786] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:20:53.094 [2024-05-15 03:15:24.205805] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.354 03:15:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.354 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:53.613 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.614 [2024-05-15 03:15:24.682905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:53.614 [2024-05-15 03:15:24.683671] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:53.614 [2024-05-15 03:15:24.683693] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:53.614 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.873 [2024-05-15 03:15:24.811085] 
bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:20:53.873 03:15:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:20:53.873 [2024-05-15 03:15:24.909763] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:53.873 [2024-05-15 03:15:24.909780] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:53.873 [2024-05-15 03:15:24.909785] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.811 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 [2024-05-15 03:15:25.938750] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:54.812 [2024-05-15 03:15:25.938771] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:54.812 [2024-05-15 03:15:25.947867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.812 [2024-05-15 03:15:25.947884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.812 [2024-05-15 03:15:25.947892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.812 [2024-05-15 03:15:25.947899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.812 [2024-05-15 03:15:25.947906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.812 [2024-05-15 03:15:25.947912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.812 [2024-05-15 03:15:25.947919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.812 [2024-05-15 03:15:25.947925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.812 [2024-05-15 03:15:25.947932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:54.812 [2024-05-15 03:15:25.957881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:54.812 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.812 [2024-05-15 03:15:25.967922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:54.812 [2024-05-15 03:15:25.968142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.812 [2024-05-15 03:15:25.968325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.812 [2024-05-15 03:15:25.968336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:54.812 [2024-05-15 03:15:25.968343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:54.812 [2024-05-15 03:15:25.968355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:54.812 [2024-05-15 03:15:25.968365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:54.812 [2024-05-15 03:15:25.968371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:54.812 [2024-05-15 03:15:25.968378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:54.812 [2024-05-15 03:15:25.968389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
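[annotation] The waitforcondition xtrace above (autotest_common.sh@910-@916) is the harness's generic polling loop. A minimal sketch reconstructed from that trace — the condition string, the max=10 retry budget, the eval, and the 1-second sleep are all visible in the log; the give-up return path is an assumption:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met (the "return 0" lines above)
            sleep 1                    # the retry pause seen between attempts
        done
        return 1                       # assumed: give up after ~10 attempts
    }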
00:20:55.072 [2024-05-15 03:15:25.977977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:55.072 [2024-05-15 03:15:25.978236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.072 [2024-05-15 03:15:25.978369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.072 [2024-05-15 03:15:25.978379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:55.072 [2024-05-15 03:15:25.978386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:55.072 [2024-05-15 03:15:25.978397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:55.073 [2024-05-15 03:15:25.978406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:55.073 [2024-05-15 03:15:25.978412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:55.073 [2024-05-15 03:15:25.978419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:55.073 [2024-05-15 03:15:25.978428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:55.073 [2024-05-15 03:15:25.988027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:55.073 [2024-05-15 03:15:25.988231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:25.988336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:25.988346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:55.073 [2024-05-15 03:15:25.988354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:55.073 [2024-05-15 03:15:25.988364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:55.073 [2024-05-15 03:15:25.988374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:55.073 [2024-05-15 03:15:25.988380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:55.073 [2024-05-15 03:15:25.988386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:55.073 [2024-05-15 03:15:25.988395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
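[annotation] The connect() failures in the retry blocks above and below report errno = 111, which on Linux is ECONNREFUSED: the 10.0.0.2:4420 listener was just removed, so each reconnect attempt is refused until discovery prunes the stale path. A quick check of the errno name (a helper command for illustration, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused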
00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:55.073 [2024-05-15 03:15:25.998080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:55.073 [2024-05-15 03:15:25.998291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:55.073 [2024-05-15 03:15:25.998475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:25.998486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:55.073 [2024-05-15 03:15:25.998493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:55.073 [2024-05-15 03:15:25.998503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:55.073 [2024-05-15 03:15:25.998512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:55.073 [2024-05-15 03:15:25.998518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:55.073 [2024-05-15 03:15:25.998524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:55.073 [2024-05-15 03:15:25.998533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
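[annotation] The host/discovery.sh@55 and @63 pipelines traced throughout this run reduce RPC output to single comparable strings. A sketch of the two query helpers, assuming rpc_cmd wraps scripts/rpc.py against the host app's RPC socket as elsewhere in this log:

    get_bdev_list() {        # names of all bdevs, space-separated
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {  # trsvcid of every path of controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }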
00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:55.073 03:15:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 [2024-05-15 03:15:26.008129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:55.073 [2024-05-15 03:15:26.008405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:26.008606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:26.008618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:55.073 [2024-05-15 03:15:26.008625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:55.073 [2024-05-15 03:15:26.008636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:55.073 [2024-05-15 03:15:26.008646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:55.073 [2024-05-15 03:15:26.008652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:55.073 [2024-05-15 03:15:26.008659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:55.073 [2024-05-15 03:15:26.008668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:55.073 [2024-05-15 03:15:26.018185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:55.073 [2024-05-15 03:15:26.018453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:26.018634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.073 [2024-05-15 03:15:26.018648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x950130 with addr=10.0.0.2, port=4420 00:20:55.073 [2024-05-15 03:15:26.018655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x950130 is same with the state(5) to be set 00:20:55.073 [2024-05-15 03:15:26.018665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950130 (9): Bad file descriptor 00:20:55.073 [2024-05-15 03:15:26.018674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:55.073 [2024-05-15 03:15:26.018680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:55.073 [2024-05-15 03:15:26.018686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:55.073 [2024-05-15 03:15:26.018695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
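[annotation] The notification bookkeeping traced at host/discovery.sh@74-@75 (notify_get_notifications -i <id>, jq '. | length', then notification_count=... and notify_id=...) amounts to a high-water-mark counter, consistent with the values logged here (count 0 keeps notify_id at 2; count 2 later advances it to 4). A hedged reconstruction:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # advance past what we saw
    }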
00:20:55.073 [2024-05-15 03:15:26.025027] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:55.073 [2024-05-15 03:15:26.025043] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:55.073 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:55.074 
03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:55.074 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.333 03:15:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.268 [2024-05-15 03:15:27.321226] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:56.268 [2024-05-15 03:15:27.321244] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:56.268 [2024-05-15 03:15:27.321254] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:56.268 [2024-05-15 03:15:27.408519] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:56.527 [2024-05-15 03:15:27.467725] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:56.527 [2024-05-15 03:15:27.467753] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 request: 00:20:56.527 { 00:20:56.527 "name": "nvme", 00:20:56.527 "trtype": "tcp", 00:20:56.527 "traddr": "10.0.0.2", 00:20:56.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:56.527 "adrfam": "ipv4", 00:20:56.527 "trsvcid": "8009", 00:20:56.527 "wait_for_attach": true, 00:20:56.527 "method": "bdev_nvme_start_discovery", 00:20:56.527 "req_id": 1 00:20:56.527 } 00:20:56.527 Got JSON-RPC error response 00:20:56.527 response: 00:20:56.527 { 00:20:56.527 "code": -17, 00:20:56.527 "message": "File exists" 00:20:56.527 } 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 request: 00:20:56.527 { 00:20:56.527 "name": "nvme_second", 00:20:56.527 "trtype": "tcp", 00:20:56.527 "traddr": "10.0.0.2", 00:20:56.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:56.527 "adrfam": "ipv4", 00:20:56.527 "trsvcid": "8009", 00:20:56.527 "wait_for_attach": true, 00:20:56.527 "method": "bdev_nvme_start_discovery", 00:20:56.527 "req_id": 1 00:20:56.527 } 00:20:56.527 Got JSON-RPC error response 00:20:56.527 response: 00:20:56.527 { 00:20:56.527 "code": -17, 00:20:56.527 "message": "File exists" 00:20:56.527 } 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.527 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.786 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:56.787 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.787 03:15:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.723 [2024-05-15 03:15:28.699916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.723 [2024-05-15 03:15:28.700163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.723 [2024-05-15 03:15:28.700175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9480b0 with addr=10.0.0.2, port=8010 00:20:57.723 [2024-05-15 03:15:28.700187] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:57.723 [2024-05-15 03:15:28.700199] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:57.723 [2024-05-15 03:15:28.700205] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:58.657 [2024-05-15 03:15:29.702481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.657 [2024-05-15 03:15:29.702742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.657 [2024-05-15 03:15:29.702752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9480b0 with addr=10.0.0.2, port=8010 00:20:58.657 [2024-05-15 03:15:29.702763] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:58.657 [2024-05-15 03:15:29.702769] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:58.657 [2024-05-15 03:15:29.702774] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:59.594 [2024-05-15 03:15:30.704641] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:59.594 request: 00:20:59.594 { 00:20:59.594 "name": "nvme_second", 00:20:59.594 "trtype": "tcp", 00:20:59.594 "traddr": "10.0.0.2", 00:20:59.594 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:20:59.594 "adrfam": "ipv4", 00:20:59.594 "trsvcid": "8010", 00:20:59.594 "attach_timeout_ms": 3000, 00:20:59.594 "method": "bdev_nvme_start_discovery", 00:20:59.594 "req_id": 1 00:20:59.594 } 00:20:59.594 Got JSON-RPC error response 00:20:59.594 response: 00:20:59.594 { 00:20:59.594 "code": -110, 00:20:59.594 "message": "Connection timed out" 00:20:59.594 } 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:59.594 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1117992 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.853 rmmod nvme_tcp 00:20:59.853 rmmod nvme_fabrics 00:20:59.853 rmmod nvme_keyring 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1117746 ']' 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1117746 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1117746 ']' 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1117746 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:20:59.853 03:15:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1117746 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1117746' 00:20:59.853 killing process with pid 1117746 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1117746 00:20:59.853 [2024-05-15 03:15:30.878401] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:59.853 03:15:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1117746 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.111 03:15:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.019 03:15:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:02.019 00:21:02.019 real 0m17.360s 00:21:02.019 user 0m22.003s 00:21:02.019 sys 0m5.190s 00:21:02.019 03:15:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:02.019 03:15:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.019 ************************************ 00:21:02.019 END TEST nvmf_host_discovery 00:21:02.019 ************************************ 00:21:02.019 03:15:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:02.019 03:15:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:02.019 03:15:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:02.019 03:15:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.279 ************************************ 00:21:02.279 START TEST nvmf_host_multipath_status 00:21:02.279 ************************************ 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:02.279 * Looking for test storage... 
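[annotation] Recapping the two JSON-RPC failures exercised in the nvmf_host_discovery run above before the multipath log continues: starting discovery a second time under an existing bdev name returns code -17 "File exists", and pointing a fresh name at the unused 8010 port with a 3000 ms attach timeout fails with code -110 "Connection timed out". Equivalent direct invocations, with the same flags as the rpc_cmd trace (rpc.py path per this workspace's rpc_py setting):

    # duplicate start under an existing name -> {"code": -17, "message": "File exists"}
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w
    # nothing listening on 8010, 3 s timeout -> {"code": -110, "message": "Connection timed out"}
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000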
00:21:02.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.279 03:15:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.279 03:15:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.552 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.552 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.552 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
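[Editor's note] The scan above matches PCI functions against a table of vendor:device IDs (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, plus the Mellanox list) and finds both ports of one E810 NIC, 0000:86:00.0 and 0000:86:00.1. Outside the harness the same lookup can be sketched with lspci; the ID list below mirrors the e810 entries in the trace:

  # List PCI functions for Intel E810 NICs (device IDs 0x1592 / 0x159b).
  intel=8086
  for dev in 1592 159b; do
    # -D full slot addresses, -mm machine-readable, -n numeric IDs,
    # -d <vendor>:<device> restricts output to that exact ID pair.
    lspci -Dmmn -d "${intel}:${dev}" | awk '{print $1}'
  done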
00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.553 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.553 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.553 03:15:38 
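[Editor's note] Each matched function is then mapped to its kernel netdev through sysfs, which is all the pci_net_devs glob above does; it yields cvl_0_0 and cvl_0_1 on this box. Standalone:

  # Netdevs registered under one PCI function (prints cvl_0_0 here).
  pci=0000:86:00.0
  for d in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$d" ] || continue    # skip if the glob matched nothing
    printf '%s\n' "${d##*/}"
  done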
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:21:07.553 00:21:07.553 --- 10.0.0.2 ping statistics --- 00:21:07.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.553 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:21:07.553 00:21:07.553 --- 10.0.0.1 ping statistics --- 00:21:07.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.553 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1122958 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1122958 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1122958 ']' 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:07.553 03:15:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:07.553 [2024-05-15 03:15:38.570116] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
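[Editor's note] Both pings succeed, so the single-host loopback topology is in place: the target port (cvl_0_0, 10.0.0.2) lives in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the default one, so NVMe/TCP traffic crosses the physical link between the two E810 ports rather than the kernel loopback. Collected in order from the trace, the wiring is:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator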
00:21:07.553 [2024-05-15 03:15:38.570157] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.553 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.553 [2024-05-15 03:15:38.626413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:07.553 [2024-05-15 03:15:38.705537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.553 [2024-05-15 03:15:38.705572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.553 [2024-05-15 03:15:38.705579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.553 [2024-05-15 03:15:38.705585] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.553 [2024-05-15 03:15:38.705590] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.553 [2024-05-15 03:15:38.705646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.553 [2024-05-15 03:15:38.705648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1122958 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:08.490 [2024-05-15 03:15:39.558376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.490 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:08.749 Malloc0 00:21:08.749 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:09.007 03:15:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.007 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.265 [2024-05-15 03:15:40.289332] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:21:09.265 [2024-05-15 03:15:40.289565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.265 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.523 [2024-05-15 03:15:40.466029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1123334 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1123334 /var/tmp/bdevperf.sock 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1123334 ']' 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
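[Editor's note] Condensed from the RPC calls above, the target-side bring-up is six calls; the two listeners on the same address but different service IDs (4420/4421) are what later give the host its two I/O paths:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                      # -r enables ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Note the deprecation warning logged for the first add_listener: rpc.py still accepts [listen_]address.transport, but trtype is the field to use going forward.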
00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.523 03:15:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:10.459 03:15:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:10.459 03:15:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:21:10.459 03:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:10.459 03:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:10.717 Nvme0n1 00:21:10.717 03:15:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:11.322 Nvme0n1 00:21:11.322 03:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:11.322 03:15:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:13.225 03:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:13.225 03:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:13.483 03:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:13.483 03:15:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:14.858 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.859 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- 
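[Editor's note] On the host side the mirror image: bdevperf (started above with -z and RPC socket /var/tmp/bdevperf.sock) attaches the same subsystem once per listener. The second bdev_nvme_attach_controller reuses the controller name with -x multipath, so instead of failing on the duplicate name it adds a second path under the existing Nvme0n1 bdev:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc -s $sock bdev_nvme_set_options -r -1
  $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # Reading -l -1 -o 10 as ctrlr-loss-timeout -1 (never give the controller up)
  # and reconnect-delay 10 s; check rpc.py -h if these matter for your setup.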
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:14.859 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.859 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:14.859 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.859 03:15:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:15.117 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.117 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:15.117 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.117 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:15.375 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.375 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:15.376 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.634 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.634 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:15.634 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:15.892 03:15:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:16.150 03:15:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:17.086 03:15:48 
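[Editor's note] From here to the end of the test the trace is the same three helpers expanding over and over. Reconstructed from the xtrace (the real multipath_status.sh may differ in detail), they amount to:

  # set_ANA_state <state for port 4420> <state for port 4421>
  # States exercised in this run: optimized, non_optimized, inaccessible.
  set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # port_status <trsvcid> <field> <expected>: true iff one flag of one path matches.
  port_status() {
    [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths | jq -r \
        ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
  }

  # check_status <current@4420> <current@4421> <connected@4420> <connected@4421>
  #              <accessible@4420> <accessible@4421>
  check_status() {
    port_status 4420 current "$1"    && port_status 4421 current "$2" &&
    port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

Each cycle is then: set_ANA_state X Y; sleep 1 (let the ANA change notification reach the host); check_status with the six expected flags.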
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:17.087 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:17.087 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.087 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:17.344 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.344 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:17.344 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.345 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:17.345 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.345 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:17.345 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.345 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:17.603 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.603 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:17.603 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.603 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.861 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.861 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:17.861 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.861 03:15:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:18.120 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:18.379 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:18.636 03:15:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:19.572 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:19.572 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:19.572 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.572 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.830 03:15:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:20.089 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.089 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:20.089 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:20.089 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.348 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.348 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:20.348 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.348 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:20.606 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:20.865 03:15:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:21.123 03:15:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:22.057 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:22.057 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:22.057 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.057 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:22.316 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.316 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:22.316 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.316 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:22.573 03:15:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.573 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:22.831 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.831 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:22.831 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.831 03:15:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:23.089 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.089 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:23.089 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.089 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:23.348 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:23.348 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:23.348 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:23.348 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:23.607 03:15:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:24.543 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:24.543 03:15:55 
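[Editor's note] The inaccessible/inaccessible step here is the clearest illustration of the three flags the test keeps polling: check_status false false true true false false expects both TCP connections to stay up (connected) while ANA makes both paths unusable (not accessible, hence neither can be current, the path actually carrying I/O). To eyeball all three flags at once, rather than one jq select per flag as the trace does:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq '.poll_groups[].io_paths[]
        | {trsvcid: .transport.trsvcid, current, connected, accessible}'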
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:24.543 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.543 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:24.801 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:24.801 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:24.801 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.801 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:25.058 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.058 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:25.058 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.058 03:15:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:25.058 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.058 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:25.058 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.058 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.316 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.316 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:25.316 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.316 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:25.575 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:25.833 03:15:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:26.092 03:15:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:27.029 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:27.029 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:27.029 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.029 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.287 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:27.545 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.545 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:27.545 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.545 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:27.804 03:15:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.062 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.062 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:28.321 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:28.321 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:28.610 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.610 03:15:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.986 03:16:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:21:29.986 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.986 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:29.986 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.986 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:30.244 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.244 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:30.244 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:30.244 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.503 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.503 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:30.503 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.503 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:30.761 03:16:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:31.019 03:16:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:31.279 03:16:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:32.215 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:21:32.215 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:32.215 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.215 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:32.474 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.732 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:32.732 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.732 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:32.732 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.732 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:32.991 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.991 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:32.991 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.991 03:16:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:33.250 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:33.508 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:33.767 03:16:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:34.704 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:34.704 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:34.704 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.704 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:34.962 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.962 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:34.962 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:34.962 03:16:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.220 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.220 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.221 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:35.479 03:16:06 
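[Editor's note] Note what changed since bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active above: with both listeners in the same usable state (both optimized, or both non_optimized as here), check_status now expects current=true on both paths at once, whereas every cycle before the policy switch had exactly one current path under the default active_passive behavior. The switch itself is a single RPC against the bdevperf socket:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active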
00:21:35.479 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:35.479 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:35.479 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:35.479 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:21:35.738 03:16:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:35.997 03:16:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:21:36.256 03:16:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:21:37.193 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:21:37.193 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:37.193 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.193 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:37.452 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:37.452 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:37.452 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.452 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.711 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:37.970 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:37.970 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:37.970 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:37.970 03:16:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
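That completes the failover check: after set_ANA_state non_optimized inaccessible, the 4420 path still reports current/connected/accessible while 4421 stays connected but drops to current=false and accessible=false, matching the true false true true true false expectation passed to check_status. A sketch of the set_ANA_state step as driven by the @59/@60 trace lines above (the two RPC invocations are taken verbatim from the trace; the wrapper around them is an assumption):

  set_ANA_state() {
      # $1/$2: desired ANA state for the 4420/4421 listeners of cnode1.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

The sleep 1 between the RPCs and the re-check gives the host driver time to observe the ANA change before the paths are queried again.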
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1123334
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1123334 ']'
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1123334
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1123334
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1123334'
00:21:38.229 killing process with pid 1123334
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1123334
00:21:38.229 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1123334
00:21:38.491 Connection closed with partial response:
00:21:38.491
00:21:38.491
00:21:38.492 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1123334
00:21:38.492 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:38.492 [2024-05-15 03:15:40.522378] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:21:38.492 [2024-05-15 03:15:40.522430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123334 ]
00:21:38.492 EAL: No free 2048 kB hugepages reported on node 1
00:21:38.492 [2024-05-15 03:15:40.571957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:38.492 [2024-05-15 03:15:40.649685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:38.492 Running I/O for 90 seconds...
00:21:38.492 [2024-05-15 03:15:54.415187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.492 [2024-05-15 03:15:54.415365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:21:38.492 [2024-05-15 03:15:54.415377] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.415610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.415617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:21:38.492 [2024-05-15 03:15:54.416795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.416804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.416825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.416844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.416865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.416985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.416998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.492 [2024-05-15 03:15:54.417233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.492 [2024-05-15 03:15:54.417651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:38.492 [2024-05-15 03:15:54.417665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417852] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.417983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.417996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.418954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.493 [2024-05-15 03:15:54.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.418994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.493 [2024-05-15 03:15:54.419255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:38.493 [2024-05-15 03:15:54.419271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:38.493 [2024-05-15 03:15:54.419278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:21:38.494 [... several hundred near-identical READ/WRITE command and completion prints elided: between 03:15:54 and 03:16:07 every queued I/O on qid:1 (lba 13104 through 38240) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the tested path's ANA state is inaccessible ...]
00:21:38.494 [2024-05-15 03:16:07.217457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:38.494 [2024-05-15 03:16:07.217470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:21:38.494 Received shutdown signal, test time was about 26.998626 seconds
00:21:38.494
00:21:38.494                                                       Latency(us)
00:21:38.494 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:21:38.494 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:38.494 Verification LBA range: start 0x0 length 0x4000
00:21:38.494 Nvme0n1                     :      27.00  10155.77    39.67     0.00   0.00   12582.22   463.03 3019898.88
00:21:38.494 ===================================================================================================================
00:21:38.494 Total                       :             10155.77    39.67     0.00   0.00   12582.22   463.03 3019898.88
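Two quick consistency checks on that summary row, as a sketch with bc (both figures are copied from the table itself): the MiB/s column is just IOPS times the 4096-byte I/O size, and Little's law (throughput is roughly queue depth over mean latency) lands within a fraction of a percent of the measured IOPS despite the two path outages.

# Consistency checks for the verify job summary (values copied from the table above):
echo 'scale=2; 10155.77 * 4096 / 1048576' | bc   # = 39.67  -> matches the MiB/s column
echo 'scale=2; 128 * 1000000 / 12582.22'  | bc   # ~ 10173  -> Little's law vs. 10155.77 IOPS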
00:21:38.494 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:38.753 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:38.753 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:38.753 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:38.753 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:38.753 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:38.754 rmmod nvme_tcp
00:21:38.754 rmmod nvme_fabrics
00:21:38.754 rmmod nvme_keyring
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1122958 ']'
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1122958
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1122958 ']'
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1122958
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1122958
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1122958'
00:21:38.754 killing process with pid 1122958
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1122958
00:21:38.754 [2024-05-15 03:16:09.906555] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:21:38.754 03:16:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1122958
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:39.012 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:39.013 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:39.013 03:16:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:41.548 03:16:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
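Collapsed for reference, the teardown just logged reduces to a handful of effective commands; a sketch (paths as in this workspace, pid as recorded by nvmfappstart):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
rm -f $SPDK/test/nvmf/host/try.txt                                      # per-test scratch file
sync
modprobe -v -r nvme-tcp      # the rmmod lines show this also unloads nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill 1122958 && wait 1122958                                            # stop the nvmf_tgt reactor
ip -4 addr flush cvl_0_1                                                # drop the initiator address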
00:21:41.548
00:21:41.548 real    0m38.991s
00:21:41.548 user    1m46.095s
00:21:41.548 sys     0m10.224s
00:21:41.548 03:16:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:21:41.548 03:16:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:41.548 ************************************
00:21:41.548 END TEST nvmf_host_multipath_status
00:21:41.548 ************************************
00:21:41.548 03:16:12 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:41.548 03:16:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:21:41.548 03:16:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:21:41.548 03:16:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:41.548 ************************************
00:21:41.548 START TEST nvmf_discovery_remove_ifc
00:21:41.548 ************************************
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:41.548 * Looking for test storage...
00:21:41.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:41.548 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:41.549 [... paths/export.sh@2-@6 elided: the script prepends the golangci 1.54.2, protoc 21.7 and go 1.21.1 toolchain directories to PATH and exports it; the repeated full PATH dumps carry no other information ...]
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
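The NVME_HOSTNQN/NVME_HOSTID pair set a few lines up can be reproduced with nvme-cli; a minimal sketch (the suffix-stripping shown is one way to derive the id, not necessarily how common.sh does it):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # bare UUID, as used for --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")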
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:41.549 03:16:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:46.824 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:46.824 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:46.824 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:21:46.824 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:46.825 [... nvmf/common.sh@289-@401 xtrace elided: gather_supported_nvmf_pci_devs builds the e810/x722/mlx PCI-ID allowlists, keeps the two detected E810 functions, and resolves each to its net device through /sys/bus/pci/devices/<pci>/net ...]
00:21:46.825 Found 0000:86:00.0 (0x8086 - 0x159b)
00:21:46.825 Found 0000:86:00.1 (0x8086 - 0x159b)
00:21:46.825 Found net devices under 0000:86:00.0: cvl_0_0
00:21:46.825 Found net devices under 0000:86:00.1: cvl_0_1
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
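The "Found net devices" lines above come from a plain sysfs walk; a sketch of the per-device lookup, using the first E810 function this run detected:

pci=0000:86:00.0                                    # detected e810 port (0x8086:0x159b)
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"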
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:46.825 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:46.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:46.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms
00:21:46.826
00:21:46.826 --- 10.0.0.2 ping statistics ---
00:21:46.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:46.826 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:46.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:46.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms
00:21:46.826
00:21:46.826 --- 10.0.0.1 ping statistics ---
00:21:46.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:46.826 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1131642
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1131642
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1131642 ']'
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:46.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:46.826 03:16:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:46.826 [2024-05-15 03:16:17.737617] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
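Put together, the namespace plumbing just traced gives the target a private netns on one E810 port and leaves the other port in the root namespace as the initiator; a consolidated sketch (interface names are specific to this node):

ip netns add cvl_0_0_ns_spdk                     # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # port 0 -> target side
ip addr add 10.0.0.1/24 dev cvl_0_1              # port 1 stays put: initiator, 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                               # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator sanity check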
00:21:46.826 [2024-05-15 03:16:17.737661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:46.826 EAL: No free 2048 kB hugepages reported on node 1
00:21:46.826 [2024-05-15 03:16:17.795919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:46.826 [2024-05-15 03:16:17.867631] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:46.826 [2024-05-15 03:16:17.867672] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:46.826 [2024-05-15 03:16:17.867679] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:46.826 [2024-05-15 03:16:17.867686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:46.826 [2024-05-15 03:16:17.867695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:46.826 [2024-05-15 03:16:17.867715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:47.392 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:47.392 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0
00:21:47.392 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:47.392 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:47.392 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:47.649 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:47.649 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:21:47.649 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:47.649 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:47.649 [2024-05-15 03:16:18.580017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:47.649 [2024-05-15 03:16:18.587988] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:21:47.649 [2024-05-15 03:16:18.588190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:21:47.649 null0
00:21:47.650 [2024-05-15 03:16:18.620154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1131890
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1131890 /tmp/host.sock
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1131890 ']'
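The rpc_cmd invocation above hides its RPC payload behind xtrace, so only the side effects are logged (transport init, listeners on 8009 and 4420, a bdev answering "null0"). A plausible rpc.py equivalent; the subsystem NQN matches what discovery later reports, but the bdev size and the exact flags are assumptions, not read from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o                            # "*** TCP Transport Init ***"
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009                                 # discovery listener on 8009
$RPC bdev_null_create null0 1000 512                            # the RPC echoes "null0" (size assumed)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420                                 # data listener on 4420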
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:21:47.650 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:47.650 03:16:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:47.650 [2024-05-15 03:16:18.687217] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:21:47.650 [2024-05-15 03:16:18.687257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131890 ]
00:21:47.650 EAL: No free 2048 kB hugepages reported on node 1
00:21:47.650 [2024-05-15 03:16:18.739992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:48.473 [2024-05-15 03:16:18.821605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:48.473 03:16:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:49.920 [2024-05-15 03:16:20.649057] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:21:49.920 [2024-05-15 03:16:20.649085] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
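The whole host side of the test hangs off that one RPC; restated stand-alone with the knobs that matter (command and values verbatim from the trace, comments are interpretation):

# -b nvme: name prefix for attached controllers, so the null namespace shows up as nvme0n1
# -q ...:  host NQN presented to the target (the nqn.2021-12.io.spdk:test set earlier)
# --ctrlr-loss-timeout-sec 2 / --reconnect-delay-sec 1 / --fast-io-fail-timeout-sec 1:
#     once the target interface disappears, retry for ~2s, then delete the controller
#     and its bdev; that deletion is what wait_for_bdev '' waits for below
# --wait-for-attach: the RPC returns only once the discovered subsystem is attached
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach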
03:16:20.649100] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.920 [2024-05-15 03:16:20.777487] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:49.920 [2024-05-15 03:16:20.839357] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:49.920 [2024-05-15 03:16:20.839401] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:49.920 [2024-05-15 03:16:20.839421] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:49.920 [2024-05-15 03:16:20.839434] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.920 [2024-05-15 03:16:20.839454] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:49.920 [2024-05-15 03:16:20.847286] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11477f0 was disconnected and freed. delete nvme_qpair. 
00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.920 03:16:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:49.920 03:16:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.920 03:16:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:49.920 03:16:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:51.292 03:16:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:52.225 03:16:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:53.156 03:16:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:54.089 03:16:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
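A reading note for the comparisons in this trace: lines such as [[ nvme0n1 != \n\v\m\e\0\n\1 ]] are not corruption. When the right-hand side of != inside [[ ]] is quoted in the script, bash's set -x output backslash-escapes every character to show the operand is matched literally rather than as a glob. A one-liner reproduces it in any recent bash:

  bash -xc 'name=nvme0n1; [[ $name != "nvme0n1" ]]'
  # prints, among other lines: + [[ nvme0n1 != \n\v\m\e\0\n\1 ]]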
00:21:55.461 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.461 [2024-05-15 03:16:26.281278] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:55.461 [2024-05-15 03:16:26.281316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.461 [2024-05-15 03:16:26.281326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.461 [2024-05-15 03:16:26.281351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.461 [2024-05-15 03:16:26.281358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.461 [2024-05-15 03:16:26.281365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.461 [2024-05-15 03:16:26.281372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.462 [2024-05-15 03:16:26.281379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.462 [2024-05-15 03:16:26.281390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.462 [2024-05-15 03:16:26.281398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.462 [2024-05-15 03:16:26.281404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.462 [2024-05-15 03:16:26.281411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e920 is same with the state(5) to be set 00:21:55.462 [2024-05-15 03:16:26.291300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110e920 (9): Bad file descriptor 00:21:55.462 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:55.462 03:16:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:55.462 [2024-05-15 03:16:26.301339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:56.397 03:16:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:56.397 [2024-05-15 
03:16:27.341548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:57.330 [2024-05-15 03:16:28.365554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:57.330 [2024-05-15 03:16:28.365603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110e920 with addr=10.0.0.2, port=4420 00:21:57.330 [2024-05-15 03:16:28.365628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e920 is same with the state(5) to be set 00:21:57.330 [2024-05-15 03:16:28.366058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110e920 (9): Bad file descriptor 00:21:57.330 [2024-05-15 03:16:28.366087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.330 [2024-05-15 03:16:28.366112] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:57.330 [2024-05-15 03:16:28.366139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.330 [2024-05-15 03:16:28.366151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.330 [2024-05-15 03:16:28.366163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.330 [2024-05-15 03:16:28.366173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.330 [2024-05-15 03:16:28.366183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.330 [2024-05-15 03:16:28.366193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.330 [2024-05-15 03:16:28.366203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.330 [2024-05-15 03:16:28.366212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.330 [2024-05-15 03:16:28.366222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.330 [2024-05-15 03:16:28.366231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.330 [2024-05-15 03:16:28.366240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
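For anyone decoding the failures above: errno = 110 from connect() and spdk_sock_recv() is ETIMEDOUT, exactly what the test expects after deleting 10.0.0.2 from the namespaced interface. One way to confirm the mapping (the header path is glibc's usual location on Linux, an assumption about this host):

  grep -w 110 /usr/include/asm-generic/errno.h
  # #define ETIMEDOUT 110 /* Connection timed out */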
00:21:57.330 [2024-05-15 03:16:28.366655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110dd50 (9): Bad file descriptor 00:21:57.330 [2024-05-15 03:16:28.367668] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:57.330 [2024-05-15 03:16:28.367682] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:57.330 03:16:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.330 03:16:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:57.330 03:16:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:58.269 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:58.526 03:16:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:59.460 [2024-05-15 03:16:30.423996] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.460 [2024-05-15 03:16:30.424018] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.460 [2024-05-15 03:16:30.424031] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.460 [2024-05-15 03:16:30.550410] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.460 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.460 [2024-05-15 03:16:30.605547] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:59.460 [2024-05-15 03:16:30.605582] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:59.460 [2024-05-15 03:16:30.605600] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:59.460 [2024-05-15 03:16:30.605613] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:59.460 [2024-05-15 03:16:30.605621] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.460 [2024-05-15 03:16:30.612330] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10fc2d0 was disconnected and freed. delete nvme_qpair. 
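The recovery leg (script steps @82, @83 and @86, traced above) closes the loop: re-adding the address and raising the link lets the discovery service reconnect, and because the previous controller was fully torn down, the re-attached subsystem surfaces as a fresh bdev, nvme1n1, not nvme0n1. Condensed from the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1   # discovery re-attaches the subsystem under a new name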
00:21:59.461 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:59.461 03:16:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1131890 ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1131890' 00:22:00.832 killing process with pid 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1131890 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.832 rmmod nvme_tcp 00:22:00.832 rmmod nvme_fabrics 00:22:00.832 rmmod nvme_keyring 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1131642 ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1131642 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1131642 ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1131642 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.832 03:16:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1131642 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1131642' 00:22:01.091 killing process with pid 1131642 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1131642 00:22:01.091 [2024-05-15 03:16:32.020164] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1131642 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.091 03:16:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.623 03:16:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:03.623 00:22:03.623 real 0m22.020s 00:22:03.623 user 0m27.619s 00:22:03.623 sys 0m5.333s 00:22:03.623 03:16:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:03.623 03:16:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.623 ************************************ 00:22:03.623 END TEST nvmf_discovery_remove_ifc 00:22:03.623 ************************************ 00:22:03.623 03:16:34 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:03.623 
03:16:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:03.623 03:16:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:03.623 03:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.623 ************************************ 00:22:03.623 START TEST nvmf_identify_kernel_target 00:22:03.623 ************************************ 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:03.623 * Looking for test storage... 00:22:03.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.623 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:03.624 03:16:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.624 03:16:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.889 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.889 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.889 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.889 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:08.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:08.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms
00:22:08.889
00:22:08.889 --- 10.0.0.2 ping statistics ---
00:22:08.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.889 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:08.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:08.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:08.889 00:22:08.889 --- 10.0.0.1 ping statistics --- 00:22:08.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.889 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.889 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:08.890 03:16:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:08.890 03:16:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:11.415 Waiting for block devices as requested 00:22:11.416 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:11.416 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:11.416 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:11.416 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:11.416 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:11.416 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:11.674 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:11.674 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:11.674 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:11.674 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:11.932 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:11.932 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:11.932 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:12.191 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:12.191 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:12.191 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:12.191 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:12.448 No valid GPT data, bailing 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:22:12.448 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:22:12.449
00:22:12.449 Discovery Log Number of Records 2, Generation counter 2
00:22:12.449 =====Discovery Log Entry 0======
00:22:12.449 trtype: tcp
00:22:12.449 adrfam: ipv4
00:22:12.449 subtype: current discovery subsystem
00:22:12.449 treq: not specified, sq flow control disable supported
00:22:12.449 portid: 1
00:22:12.449 trsvcid: 4420
00:22:12.449 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:22:12.449 traddr: 10.0.0.1
00:22:12.449 eflags: none
00:22:12.449 sectype: none
00:22:12.449 =====Discovery Log Entry 1======
00:22:12.449 trtype: tcp
00:22:12.449 adrfam: ipv4
00:22:12.449 subtype: nvme subsystem
00:22:12.449 treq: not specified, sq flow control disable supported
00:22:12.449 portid: 1
00:22:12.449 trsvcid: 4420
00:22:12.449 subnqn: nqn.2016-06.io.spdk:testnqn
00:22:12.449 traddr: 10.0.0.1
00:22:12.449 eflags: none
00:22:12.449 sectype: none
00:22:12.449 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:22:12.449 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:22:12.739 EAL: No free 2048 kB hugepages reported on node 1
00:22:12.739 =====================================================
00:22:12.739 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:12.739 =====================================================
00:22:12.739 Controller Capabilities/Features
00:22:12.739 ================================
00:22:12.739 Vendor ID: 0000
00:22:12.739 Subsystem Vendor ID: 0000
00:22:12.739 Serial Number: f2adeda1a492cf0334ab
00:22:12.739 Model Number: Linux
00:22:12.739 Firmware Version: 6.7.0-68
00:22:12.739 Recommended Arb Burst: 0
00:22:12.739 IEEE OUI Identifier: 00 00 00
00:22:12.739 Multi-path I/O
00:22:12.739 May have multiple subsystem ports: No
00:22:12.739 May have multiple
controllers: No 00:22:12.739 Associated with SR-IOV VF: No 00:22:12.739 Max Data Transfer Size: Unlimited 00:22:12.739 Max Number of Namespaces: 0 00:22:12.739 Max Number of I/O Queues: 1024 00:22:12.739 NVMe Specification Version (VS): 1.3 00:22:12.739 NVMe Specification Version (Identify): 1.3 00:22:12.739 Maximum Queue Entries: 1024 00:22:12.739 Contiguous Queues Required: No 00:22:12.739 Arbitration Mechanisms Supported 00:22:12.739 Weighted Round Robin: Not Supported 00:22:12.739 Vendor Specific: Not Supported 00:22:12.739 Reset Timeout: 7500 ms 00:22:12.739 Doorbell Stride: 4 bytes 00:22:12.739 NVM Subsystem Reset: Not Supported 00:22:12.739 Command Sets Supported 00:22:12.739 NVM Command Set: Supported 00:22:12.739 Boot Partition: Not Supported 00:22:12.739 Memory Page Size Minimum: 4096 bytes 00:22:12.739 Memory Page Size Maximum: 4096 bytes 00:22:12.739 Persistent Memory Region: Not Supported 00:22:12.739 Optional Asynchronous Events Supported 00:22:12.739 Namespace Attribute Notices: Not Supported 00:22:12.739 Firmware Activation Notices: Not Supported 00:22:12.739 ANA Change Notices: Not Supported 00:22:12.739 PLE Aggregate Log Change Notices: Not Supported 00:22:12.739 LBA Status Info Alert Notices: Not Supported 00:22:12.739 EGE Aggregate Log Change Notices: Not Supported 00:22:12.739 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.739 Zone Descriptor Change Notices: Not Supported 00:22:12.739 Discovery Log Change Notices: Supported 00:22:12.739 Controller Attributes 00:22:12.739 128-bit Host Identifier: Not Supported 00:22:12.739 Non-Operational Permissive Mode: Not Supported 00:22:12.739 NVM Sets: Not Supported 00:22:12.739 Read Recovery Levels: Not Supported 00:22:12.739 Endurance Groups: Not Supported 00:22:12.739 Predictable Latency Mode: Not Supported 00:22:12.739 Traffic Based Keep ALive: Not Supported 00:22:12.739 Namespace Granularity: Not Supported 00:22:12.739 SQ Associations: Not Supported 00:22:12.739 UUID List: Not Supported 00:22:12.739 Multi-Domain Subsystem: Not Supported 00:22:12.739 Fixed Capacity Management: Not Supported 00:22:12.739 Variable Capacity Management: Not Supported 00:22:12.739 Delete Endurance Group: Not Supported 00:22:12.739 Delete NVM Set: Not Supported 00:22:12.739 Extended LBA Formats Supported: Not Supported 00:22:12.739 Flexible Data Placement Supported: Not Supported 00:22:12.739 00:22:12.739 Controller Memory Buffer Support 00:22:12.739 ================================ 00:22:12.739 Supported: No 00:22:12.740 00:22:12.740 Persistent Memory Region Support 00:22:12.740 ================================ 00:22:12.740 Supported: No 00:22:12.740 00:22:12.740 Admin Command Set Attributes 00:22:12.740 ============================ 00:22:12.740 Security Send/Receive: Not Supported 00:22:12.740 Format NVM: Not Supported 00:22:12.740 Firmware Activate/Download: Not Supported 00:22:12.740 Namespace Management: Not Supported 00:22:12.740 Device Self-Test: Not Supported 00:22:12.740 Directives: Not Supported 00:22:12.740 NVMe-MI: Not Supported 00:22:12.740 Virtualization Management: Not Supported 00:22:12.740 Doorbell Buffer Config: Not Supported 00:22:12.740 Get LBA Status Capability: Not Supported 00:22:12.740 Command & Feature Lockdown Capability: Not Supported 00:22:12.740 Abort Command Limit: 1 00:22:12.740 Async Event Request Limit: 1 00:22:12.740 Number of Firmware Slots: N/A 00:22:12.740 Firmware Slot 1 Read-Only: N/A 00:22:12.740 Firmware Activation Without Reset: N/A 00:22:12.740 Multiple Update Detection Support: N/A 
00:22:12.740 Firmware Update Granularity: No Information Provided 00:22:12.740 Per-Namespace SMART Log: No 00:22:12.740 Asymmetric Namespace Access Log Page: Not Supported 00:22:12.740 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:12.740 Command Effects Log Page: Not Supported 00:22:12.740 Get Log Page Extended Data: Supported 00:22:12.740 Telemetry Log Pages: Not Supported 00:22:12.740 Persistent Event Log Pages: Not Supported 00:22:12.740 Supported Log Pages Log Page: May Support 00:22:12.740 Commands Supported & Effects Log Page: Not Supported 00:22:12.740 Feature Identifiers & Effects Log Page:May Support 00:22:12.740 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.740 Data Area 4 for Telemetry Log: Not Supported 00:22:12.740 Error Log Page Entries Supported: 1 00:22:12.740 Keep Alive: Not Supported 00:22:12.740 00:22:12.740 NVM Command Set Attributes 00:22:12.740 ========================== 00:22:12.740 Submission Queue Entry Size 00:22:12.740 Max: 1 00:22:12.740 Min: 1 00:22:12.740 Completion Queue Entry Size 00:22:12.740 Max: 1 00:22:12.740 Min: 1 00:22:12.740 Number of Namespaces: 0 00:22:12.740 Compare Command: Not Supported 00:22:12.740 Write Uncorrectable Command: Not Supported 00:22:12.740 Dataset Management Command: Not Supported 00:22:12.740 Write Zeroes Command: Not Supported 00:22:12.740 Set Features Save Field: Not Supported 00:22:12.740 Reservations: Not Supported 00:22:12.740 Timestamp: Not Supported 00:22:12.740 Copy: Not Supported 00:22:12.740 Volatile Write Cache: Not Present 00:22:12.740 Atomic Write Unit (Normal): 1 00:22:12.740 Atomic Write Unit (PFail): 1 00:22:12.740 Atomic Compare & Write Unit: 1 00:22:12.740 Fused Compare & Write: Not Supported 00:22:12.740 Scatter-Gather List 00:22:12.740 SGL Command Set: Supported 00:22:12.740 SGL Keyed: Not Supported 00:22:12.740 SGL Bit Bucket Descriptor: Not Supported 00:22:12.740 SGL Metadata Pointer: Not Supported 00:22:12.740 Oversized SGL: Not Supported 00:22:12.740 SGL Metadata Address: Not Supported 00:22:12.740 SGL Offset: Supported 00:22:12.740 Transport SGL Data Block: Not Supported 00:22:12.740 Replay Protected Memory Block: Not Supported 00:22:12.740 00:22:12.740 Firmware Slot Information 00:22:12.740 ========================= 00:22:12.740 Active slot: 0 00:22:12.740 00:22:12.740 00:22:12.740 Error Log 00:22:12.740 ========= 00:22:12.740 00:22:12.740 Active Namespaces 00:22:12.740 ================= 00:22:12.740 Discovery Log Page 00:22:12.740 ================== 00:22:12.740 Generation Counter: 2 00:22:12.740 Number of Records: 2 00:22:12.740 Record Format: 0 00:22:12.740 00:22:12.740 Discovery Log Entry 0 00:22:12.740 ---------------------- 00:22:12.740 Transport Type: 3 (TCP) 00:22:12.740 Address Family: 1 (IPv4) 00:22:12.740 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:12.740 Entry Flags: 00:22:12.740 Duplicate Returned Information: 0 00:22:12.740 Explicit Persistent Connection Support for Discovery: 0 00:22:12.740 Transport Requirements: 00:22:12.740 Secure Channel: Not Specified 00:22:12.740 Port ID: 1 (0x0001) 00:22:12.740 Controller ID: 65535 (0xffff) 00:22:12.740 Admin Max SQ Size: 32 00:22:12.740 Transport Service Identifier: 4420 00:22:12.740 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:12.740 Transport Address: 10.0.0.1 00:22:12.740 Discovery Log Entry 1 00:22:12.740 ---------------------- 00:22:12.740 Transport Type: 3 (TCP) 00:22:12.740 Address Family: 1 (IPv4) 00:22:12.740 Subsystem Type: 2 (NVM Subsystem) 00:22:12.740 Entry Flags: 
00:22:12.740 Duplicate Returned Information: 0 00:22:12.740 Explicit Persistent Connection Support for Discovery: 0 00:22:12.740 Transport Requirements: 00:22:12.740 Secure Channel: Not Specified 00:22:12.740 Port ID: 1 (0x0001) 00:22:12.740 Controller ID: 65535 (0xffff) 00:22:12.740 Admin Max SQ Size: 32 00:22:12.740 Transport Service Identifier: 4420 00:22:12.740 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:12.740 Transport Address: 10.0.0.1 00:22:12.740 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:12.740 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.740 get_feature(0x01) failed 00:22:12.740 get_feature(0x02) failed 00:22:12.740 get_feature(0x04) failed 00:22:12.740 ===================================================== 00:22:12.740 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:12.740 ===================================================== 00:22:12.740 Controller Capabilities/Features 00:22:12.740 ================================ 00:22:12.740 Vendor ID: 0000 00:22:12.740 Subsystem Vendor ID: 0000 00:22:12.740 Serial Number: d9ec5d91becdfb6ef953 00:22:12.740 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:12.740 Firmware Version: 6.7.0-68 00:22:12.740 Recommended Arb Burst: 6 00:22:12.740 IEEE OUI Identifier: 00 00 00 00:22:12.740 Multi-path I/O 00:22:12.740 May have multiple subsystem ports: Yes 00:22:12.740 May have multiple controllers: Yes 00:22:12.740 Associated with SR-IOV VF: No 00:22:12.740 Max Data Transfer Size: Unlimited 00:22:12.740 Max Number of Namespaces: 1024 00:22:12.740 Max Number of I/O Queues: 128 00:22:12.740 NVMe Specification Version (VS): 1.3 00:22:12.740 NVMe Specification Version (Identify): 1.3 00:22:12.740 Maximum Queue Entries: 1024 00:22:12.740 Contiguous Queues Required: No 00:22:12.740 Arbitration Mechanisms Supported 00:22:12.740 Weighted Round Robin: Not Supported 00:22:12.740 Vendor Specific: Not Supported 00:22:12.740 Reset Timeout: 7500 ms 00:22:12.740 Doorbell Stride: 4 bytes 00:22:12.740 NVM Subsystem Reset: Not Supported 00:22:12.740 Command Sets Supported 00:22:12.740 NVM Command Set: Supported 00:22:12.740 Boot Partition: Not Supported 00:22:12.740 Memory Page Size Minimum: 4096 bytes 00:22:12.740 Memory Page Size Maximum: 4096 bytes 00:22:12.740 Persistent Memory Region: Not Supported 00:22:12.740 Optional Asynchronous Events Supported 00:22:12.740 Namespace Attribute Notices: Supported 00:22:12.740 Firmware Activation Notices: Not Supported 00:22:12.740 ANA Change Notices: Supported 00:22:12.740 PLE Aggregate Log Change Notices: Not Supported 00:22:12.740 LBA Status Info Alert Notices: Not Supported 00:22:12.740 EGE Aggregate Log Change Notices: Not Supported 00:22:12.740 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.740 Zone Descriptor Change Notices: Not Supported 00:22:12.740 Discovery Log Change Notices: Not Supported 00:22:12.740 Controller Attributes 00:22:12.740 128-bit Host Identifier: Supported 00:22:12.740 Non-Operational Permissive Mode: Not Supported 00:22:12.740 NVM Sets: Not Supported 00:22:12.740 Read Recovery Levels: Not Supported 00:22:12.740 Endurance Groups: Not Supported 00:22:12.740 Predictable Latency Mode: Not Supported 00:22:12.740 Traffic Based Keep ALive: Supported 00:22:12.740 Namespace Granularity: Not Supported 
00:22:12.740 SQ Associations: Not Supported 00:22:12.740 UUID List: Not Supported 00:22:12.740 Multi-Domain Subsystem: Not Supported 00:22:12.740 Fixed Capacity Management: Not Supported 00:22:12.740 Variable Capacity Management: Not Supported 00:22:12.740 Delete Endurance Group: Not Supported 00:22:12.740 Delete NVM Set: Not Supported 00:22:12.740 Extended LBA Formats Supported: Not Supported 00:22:12.740 Flexible Data Placement Supported: Not Supported 00:22:12.740 00:22:12.740 Controller Memory Buffer Support 00:22:12.740 ================================ 00:22:12.740 Supported: No 00:22:12.740 00:22:12.740 Persistent Memory Region Support 00:22:12.740 ================================ 00:22:12.740 Supported: No 00:22:12.740 00:22:12.740 Admin Command Set Attributes 00:22:12.740 ============================ 00:22:12.741 Security Send/Receive: Not Supported 00:22:12.741 Format NVM: Not Supported 00:22:12.741 Firmware Activate/Download: Not Supported 00:22:12.741 Namespace Management: Not Supported 00:22:12.741 Device Self-Test: Not Supported 00:22:12.741 Directives: Not Supported 00:22:12.741 NVMe-MI: Not Supported 00:22:12.741 Virtualization Management: Not Supported 00:22:12.741 Doorbell Buffer Config: Not Supported 00:22:12.741 Get LBA Status Capability: Not Supported 00:22:12.741 Command & Feature Lockdown Capability: Not Supported 00:22:12.741 Abort Command Limit: 4 00:22:12.741 Async Event Request Limit: 4 00:22:12.741 Number of Firmware Slots: N/A 00:22:12.741 Firmware Slot 1 Read-Only: N/A 00:22:12.741 Firmware Activation Without Reset: N/A 00:22:12.741 Multiple Update Detection Support: N/A 00:22:12.741 Firmware Update Granularity: No Information Provided 00:22:12.741 Per-Namespace SMART Log: Yes 00:22:12.741 Asymmetric Namespace Access Log Page: Supported 00:22:12.741 ANA Transition Time : 10 sec 00:22:12.741 00:22:12.741 Asymmetric Namespace Access Capabilities 00:22:12.741 ANA Optimized State : Supported 00:22:12.741 ANA Non-Optimized State : Supported 00:22:12.741 ANA Inaccessible State : Supported 00:22:12.741 ANA Persistent Loss State : Supported 00:22:12.741 ANA Change State : Supported 00:22:12.741 ANAGRPID is not changed : No 00:22:12.741 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:12.741 00:22:12.741 ANA Group Identifier Maximum : 128 00:22:12.741 Number of ANA Group Identifiers : 128 00:22:12.741 Max Number of Allowed Namespaces : 1024 00:22:12.741 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:12.741 Command Effects Log Page: Supported 00:22:12.741 Get Log Page Extended Data: Supported 00:22:12.741 Telemetry Log Pages: Not Supported 00:22:12.741 Persistent Event Log Pages: Not Supported 00:22:12.741 Supported Log Pages Log Page: May Support 00:22:12.741 Commands Supported & Effects Log Page: Not Supported 00:22:12.741 Feature Identifiers & Effects Log Page:May Support 00:22:12.741 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.741 Data Area 4 for Telemetry Log: Not Supported 00:22:12.741 Error Log Page Entries Supported: 128 00:22:12.741 Keep Alive: Supported 00:22:12.741 Keep Alive Granularity: 1000 ms 00:22:12.741 00:22:12.741 NVM Command Set Attributes 00:22:12.741 ========================== 00:22:12.741 Submission Queue Entry Size 00:22:12.741 Max: 64 00:22:12.741 Min: 64 00:22:12.741 Completion Queue Entry Size 00:22:12.741 Max: 16 00:22:12.741 Min: 16 00:22:12.741 Number of Namespaces: 1024 00:22:12.741 Compare Command: Not Supported 00:22:12.741 Write Uncorrectable Command: Not Supported 00:22:12.741 Dataset Management Command: Supported 
00:22:12.741 Write Zeroes Command: Supported 00:22:12.741 Set Features Save Field: Not Supported 00:22:12.741 Reservations: Not Supported 00:22:12.741 Timestamp: Not Supported 00:22:12.741 Copy: Not Supported 00:22:12.741 Volatile Write Cache: Present 00:22:12.741 Atomic Write Unit (Normal): 1 00:22:12.741 Atomic Write Unit (PFail): 1 00:22:12.741 Atomic Compare & Write Unit: 1 00:22:12.741 Fused Compare & Write: Not Supported 00:22:12.741 Scatter-Gather List 00:22:12.741 SGL Command Set: Supported 00:22:12.741 SGL Keyed: Not Supported 00:22:12.741 SGL Bit Bucket Descriptor: Not Supported 00:22:12.741 SGL Metadata Pointer: Not Supported 00:22:12.741 Oversized SGL: Not Supported 00:22:12.741 SGL Metadata Address: Not Supported 00:22:12.741 SGL Offset: Supported 00:22:12.741 Transport SGL Data Block: Not Supported 00:22:12.741 Replay Protected Memory Block: Not Supported 00:22:12.741 00:22:12.741 Firmware Slot Information 00:22:12.741 ========================= 00:22:12.741 Active slot: 0 00:22:12.741 00:22:12.741 Asymmetric Namespace Access 00:22:12.741 =========================== 00:22:12.741 Change Count : 0 00:22:12.741 Number of ANA Group Descriptors : 1 00:22:12.741 ANA Group Descriptor : 0 00:22:12.741 ANA Group ID : 1 00:22:12.741 Number of NSID Values : 1 00:22:12.741 Change Count : 0 00:22:12.741 ANA State : 1 00:22:12.741 Namespace Identifier : 1 00:22:12.741 00:22:12.741 Commands Supported and Effects 00:22:12.741 ============================== 00:22:12.741 Admin Commands 00:22:12.741 -------------- 00:22:12.741 Get Log Page (02h): Supported 00:22:12.741 Identify (06h): Supported 00:22:12.741 Abort (08h): Supported 00:22:12.741 Set Features (09h): Supported 00:22:12.741 Get Features (0Ah): Supported 00:22:12.741 Asynchronous Event Request (0Ch): Supported 00:22:12.741 Keep Alive (18h): Supported 00:22:12.741 I/O Commands 00:22:12.741 ------------ 00:22:12.741 Flush (00h): Supported 00:22:12.741 Write (01h): Supported LBA-Change 00:22:12.741 Read (02h): Supported 00:22:12.741 Write Zeroes (08h): Supported LBA-Change 00:22:12.741 Dataset Management (09h): Supported 00:22:12.741 00:22:12.741 Error Log 00:22:12.741 ========= 00:22:12.741 Entry: 0 00:22:12.741 Error Count: 0x3 00:22:12.741 Submission Queue Id: 0x0 00:22:12.741 Command Id: 0x5 00:22:12.741 Phase Bit: 0 00:22:12.741 Status Code: 0x2 00:22:12.741 Status Code Type: 0x0 00:22:12.741 Do Not Retry: 1 00:22:12.741 Error Location: 0x28 00:22:12.741 LBA: 0x0 00:22:12.741 Namespace: 0x0 00:22:12.741 Vendor Log Page: 0x0 00:22:12.741 ----------- 00:22:12.741 Entry: 1 00:22:12.741 Error Count: 0x2 00:22:12.741 Submission Queue Id: 0x0 00:22:12.741 Command Id: 0x5 00:22:12.741 Phase Bit: 0 00:22:12.741 Status Code: 0x2 00:22:12.741 Status Code Type: 0x0 00:22:12.741 Do Not Retry: 1 00:22:12.741 Error Location: 0x28 00:22:12.741 LBA: 0x0 00:22:12.741 Namespace: 0x0 00:22:12.741 Vendor Log Page: 0x0 00:22:12.741 ----------- 00:22:12.741 Entry: 2 00:22:12.741 Error Count: 0x1 00:22:12.741 Submission Queue Id: 0x0 00:22:12.741 Command Id: 0x4 00:22:12.741 Phase Bit: 0 00:22:12.741 Status Code: 0x2 00:22:12.741 Status Code Type: 0x0 00:22:12.741 Do Not Retry: 1 00:22:12.741 Error Location: 0x28 00:22:12.741 LBA: 0x0 00:22:12.741 Namespace: 0x0 00:22:12.741 Vendor Log Page: 0x0 00:22:12.741 00:22:12.741 Number of Queues 00:22:12.741 ================ 00:22:12.741 Number of I/O Submission Queues: 128 00:22:12.741 Number of I/O Completion Queues: 128 00:22:12.741 00:22:12.741 ZNS Specific Controller Data 00:22:12.741 
============================ 00:22:12.741 Zone Append Size Limit: 0 00:22:12.741 00:22:12.741 00:22:12.741 Active Namespaces 00:22:12.741 ================= 00:22:12.741 get_feature(0x05) failed 00:22:12.741 Namespace ID:1 00:22:12.741 Command Set Identifier: NVM (00h) 00:22:12.741 Deallocate: Supported 00:22:12.741 Deallocated/Unwritten Error: Not Supported 00:22:12.741 Deallocated Read Value: Unknown 00:22:12.741 Deallocate in Write Zeroes: Not Supported 00:22:12.741 Deallocated Guard Field: 0xFFFF 00:22:12.741 Flush: Supported 00:22:12.741 Reservation: Not Supported 00:22:12.741 Namespace Sharing Capabilities: Multiple Controllers 00:22:12.741 Size (in LBAs): 1953525168 (931GiB) 00:22:12.741 Capacity (in LBAs): 1953525168 (931GiB) 00:22:12.741 Utilization (in LBAs): 1953525168 (931GiB) 00:22:12.741 UUID: 480931cd-6c65-4d2d-8729-83013a6940a0 00:22:12.741 Thin Provisioning: Not Supported 00:22:12.741 Per-NS Atomic Units: Yes 00:22:12.741 Atomic Boundary Size (Normal): 0 00:22:12.741 Atomic Boundary Size (PFail): 0 00:22:12.741 Atomic Boundary Offset: 0 00:22:12.741 NGUID/EUI64 Never Reused: No 00:22:12.741 ANA group ID: 1 00:22:12.741 Namespace Write Protected: No 00:22:12.741 Number of LBA Formats: 1 00:22:12.741 Current LBA Format: LBA Format #00 00:22:12.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:12.741 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.741 rmmod nvme_tcp 00:22:12.741 rmmod nvme_fabrics 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.741 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.742 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.742 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.742 03:16:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.271 
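[editor's note] For reference, the identify listings above are produced by pointing spdk_nvme_identify at the kernel target with an '-r' transport-ID string; a minimal sketch of the two queries (only the second command appears verbatim in this log — the discovery-controller call without subnqn is inferred from the discovery output above):

IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify

# Discovery controller (no subnqn -> nqn.2014-08.org.nvmexpress.discovery):
# yields the Discovery Log Page with the two records shown above.
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'

# Exported subsystem: full controller, ANA, and namespace data for testnqn.
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'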
03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:15.271 03:16:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:17.171 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:17.171 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:18.107 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:18.107 00:22:18.107 real 0m14.818s 00:22:18.107 user 0m3.361s 00:22:18.107 sys 0m7.689s 00:22:18.107 03:16:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:18.107 03:16:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.107 ************************************ 00:22:18.107 END TEST nvmf_identify_kernel_target 00:22:18.107 ************************************ 00:22:18.108 03:16:49 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.108 03:16:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:18.108 03:16:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:18.108 03:16:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.108 ************************************ 00:22:18.108 START TEST nvmf_auth 00:22:18.108 ************************************ 00:22:18.108 03:16:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.366 * 
Looking for test storage... 00:22:18.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.367 03:16:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.633 03:16:54 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.633 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.633 Found net devices under 0000:86:00.1: cvl_0_1 
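[editor's note] The device-discovery loop above maps each supported PCI function to its kernel net device by globbing sysfs, then keeps only the interface name; a standalone sketch of the same idea (the PCI addresses and cvl_* names are specific to this rig):

# Each NIC port exposes its net device under its PCI node in sysfs.
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done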
00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:23.633 00:22:23.633 --- 10.0.0.2 ping statistics --- 00:22:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.633 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:22:23.633 00:22:23.633 --- 10.0.0.1 ping statistics --- 00:22:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.633 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.633 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=1143667 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 1143667 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 1143667 ']' 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
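[editor's note] Condensed, the nvmf_tcp_init sequence above builds a two-port loopback topology: the target port is moved into a network namespace so target and initiator traffic traverses a real link between the two NIC ports, then connectivity is verified both ways. The commands, as they appear in the trace:

# Target side lives in a netns; the initiator stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator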
00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:23.634 03:16:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=1acc8cf2b75e9066cb92f8a0323a2aa9 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.z6I 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 1acc8cf2b75e9066cb92f8a0323a2aa9 0 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 1acc8cf2b75e9066cb92f8a0323a2aa9 0 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=1acc8cf2b75e9066cb92f8a0323a2aa9 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.z6I 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.z6I 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.z6I 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=d6d487fe4299db6727c2aff2e8b76dc2f0432fde7325c73c31e6ff846aefe56a 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.0oZ 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d6d487fe4299db6727c2aff2e8b76dc2f0432fde7325c73c31e6ff846aefe56a 3 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d6d487fe4299db6727c2aff2e8b76dc2f0432fde7325c73c31e6ff846aefe56a 3 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d6d487fe4299db6727c2aff2e8b76dc2f0432fde7325c73c31e6ff846aefe56a 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:22:24.568 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.0oZ 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.0oZ 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.0oZ 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=9289152495035d544baca70f7370beaecbc73393190f6b47 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.goV 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9289152495035d544baca70f7370beaecbc73393190f6b47 0 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9289152495035d544baca70f7370beaecbc73393190f6b47 0 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9289152495035d544baca70f7370beaecbc73393190f6b47 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.goV 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.goV 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.goV 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.826 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=4d216dd7a22d69a8a7b1352a53122186b1806d5d83417a14 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.9u6 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 4d216dd7a22d69a8a7b1352a53122186b1806d5d83417a14 2 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 4d216dd7a22d69a8a7b1352a53122186b1806d5d83417a14 2 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=4d216dd7a22d69a8a7b1352a53122186b1806d5d83417a14 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.9u6 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.9u6 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.9u6 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=2959b166ef947e881b0e4a5e8e0f381d 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.sar 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 2959b166ef947e881b0e4a5e8e0f381d 1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2959b166ef947e881b0e4a5e8e0f381d 1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2959b166ef947e881b0e4a5e8e0f381d 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.sar 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.sar 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.sar 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=9443091cc3192455b7e4428b8b4b4da4 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.ZKr 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9443091cc3192455b7e4428b8b4b4da4 1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9443091cc3192455b7e4428b8b4b4da4 1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9443091cc3192455b7e4428b8b4b4da4 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:22:24.827 03:16:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.ZKr 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.ZKr 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.ZKr 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:25.085 03:16:55 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d87878c285b98af55d231a87e5024caea4986a192c2cd164 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.FXG 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d87878c285b98af55d231a87e5024caea4986a192c2cd164 2 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d87878c285b98af55d231a87e5024caea4986a192c2cd164 2 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d87878c285b98af55d231a87e5024caea4986a192c2cd164 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.FXG 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.FXG 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.FXG 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:25.085 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=fc6d31b80c90b5d798851ffcd577b56c 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.H7n 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key fc6d31b80c90b5d798851ffcd577b56c 0 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 fc6d31b80c90b5d798851ffcd577b56c 0 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=fc6d31b80c90b5d798851ffcd577b56c 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.H7n 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.H7n 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.H7n 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=325894bfc9b7720ae63bb33cba16f3f28f4b22f59b03a47534fb860d692e40f8 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.ZoQ 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 325894bfc9b7720ae63bb33cba16f3f28f4b22f59b03a47534fb860d692e40f8 3 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 325894bfc9b7720ae63bb33cba16f3f28f4b22f59b03a47534fb860d692e40f8 3 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=325894bfc9b7720ae63bb33cba16f3f28f4b22f59b03a47534fb860d692e40f8 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.ZoQ 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.ZoQ 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.ZoQ 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 1143667 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 1143667 ']' 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:25.086 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.z6I 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.0oZ ]] 00:22:25.344 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0oZ 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.goV 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.9u6 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9u6 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.sar 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.ZKr ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZKr 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FXG 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.H7n ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.H7n 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZoQ 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth 
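Annotation: rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the keyring_file_add_key loop above (host/auth.sh@93-95) is equivalent to plain RPC calls; each secret file becomes a named key, with ckeyN holding the optional controller-side (bidirectional) secret:

    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.z6I
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0oZ
    ./scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.FXG
    ./scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.H7n
    ./scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.ZoQ   # no ckey4: key 4 stays host-side only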
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:25.345 03:16:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:27.869 Waiting for block devices as requested 00:22:27.869 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:28.126 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:28.126 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:28.126 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:28.126 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:28.384 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:28.384 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:28.384 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:28.384 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:28.641 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:28.641 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:28.641 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:28.898 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:28.898 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:28.898 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:29.155 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:29.155 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:29.719 No valid GPT data, bailing 00:22:29.719 
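Annotation: configure_kernel_target (entered at nvmf/common.sh@632) drives the in-kernel nvmet target entirely through configfs; the mkdir/echo/ln -s run that follows in the log condenses to roughly the sketch below. The echoed values come straight from the trace, but the attribute names are reconstructed from the nvmet configfs ABI rather than read out of the script, so treat them as an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # @665
    echo 1            > "$subsys/attr_allow_any_host"             # @667
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # @668: the freshly reset local NVMe disk
    echo 1            > "$subsys/namespaces/1/enable"             # @669
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"              # @671
    echo tcp          > "$nvmet/ports/1/addr_trtype"              # @672
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"             # @673
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"              # @674
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # @677: expose the subsystem on the port

Once the symlink lands, the nvme discover output further down confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on 10.0.0.1:4420.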
03:17:00 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:22:29.719 00:22:29.719 Discovery Log Number of Records 2, Generation counter 2 00:22:29.719 =====Discovery Log Entry 0====== 00:22:29.719 trtype: tcp 00:22:29.719 adrfam: ipv4 00:22:29.719 subtype: current discovery subsystem 00:22:29.719 treq: not specified, sq flow control disable supported 00:22:29.719 portid: 1 00:22:29.719 trsvcid: 4420 00:22:29.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:29.719 traddr: 10.0.0.1 00:22:29.719 eflags: none 00:22:29.719 sectype: none 00:22:29.719 =====Discovery Log Entry 1====== 00:22:29.719 trtype: tcp 00:22:29.719 adrfam: ipv4 00:22:29.719 subtype: nvme subsystem 00:22:29.719 treq: not specified, sq flow control disable supported 00:22:29.719 portid: 1 00:22:29.719 trsvcid: 4420 00:22:29.719 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:29.719 traddr: 10.0.0.1 00:22:29.719 eflags: none 00:22:29.719 sectype: none 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:29.719 03:17:00 
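Annotation: nvmet_auth_set_key (host/auth.sh@42-51, first invoked just above) is the target-side half of each iteration: the four echoes at @48-@51 push the negotiated hash, DH group, and secrets into the host entry created at @36. Assuming they land in the standard nvmet per-host auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), this first iteration amounts to:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # @48: kernel crypto name for the digest
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # @49
    echo "DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==:" > "$host/dhchap_key"        # @50
    echo "DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==:" > "$host/dhchap_ctrl_key"   # @51: only written when a ckey exists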
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.719 03:17:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:29.977 nvme0n1 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
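Annotation: connect_authenticate (host/auth.sh@68 onward) is the initiator-side half: pin the allowed digests and DH groups, attach with the keyring names registered earlier, check that the controller actually appears, then tear it down. Stripped of the harness plumbing, one iteration is:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # must print "nvme0" (host/auth.sh@77)
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0              # @78: clean up for the next combination

Note the very first connect (@101-@107) deliberately advertised every digest and dhgroup at once (sha256,sha384,sha512 / ffdhe2048..ffdhe8192) as a smoke test; the @113-@117 loop that follows narrows to a single pair per attach.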
00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.977 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.235 nvme0n1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:30.235 03:17:01 
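Annotation: the get_main_ns_ip block that repeats before every attach (nvmf/common.sh@728-742) just resolves which address the initiator should dial for the current transport. Reconstructed from its own xtrace, the helper looks something like this sketch (not the verbatim source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # @731
            [tcp]=NVMF_INITIATOR_IP       # @732
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @734
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @735: holds the variable *name*
        [[ -z ${!ip} ]] && return 1            # @737: indirect expansion, here 10.0.0.1
        echo "${!ip}"                          # @742
    }

For this tcp run it always resolves to NVMF_INITIATOR_IP=10.0.0.1, which is why every bdev_nvme_attach_controller below carries -a 10.0.0.1.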
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.235 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.493 nvme0n1 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.493 03:17:01 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.493 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.751 nvme0n1 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:30.751 nvme0n1 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.751 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 03:17:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 nvme0n1 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.022 03:17:02 
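Annotation: key 4 is the interesting case in this pass. ckeys[4] was left empty at generation time (host/auth.sh@90), so the :+ expansion at @71 silently drops the controller-key flag and the attach above ran with --dhchap-key key4 alone, i.e. unidirectional authentication: the target verifies the host, but the host does not verify the target.

    # host/auth.sh@71 (verbatim in the trace): ckey expands to nothing when ckeys[keyid] is empty/unset
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"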
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.022 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.282 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:31.283 03:17:02 
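Annotation: at this point the log has rolled over to the second DH group (ffdhe3072) and the whole set_key/attach/verify/detach pattern starts again. The driver visible in the @113-@117 frames walks the full matrix, so the same five keys are exercised for every digest/dhgroup pair; presumably something like:

    for digest in "${digests[@]}"; do                      # host/auth.sh@113: sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do                # @114: ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
            for keyid in "${!keys[@]}"; do                 # @115: 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @116: program the kernel target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @117: attach from SPDK and verify
            done
        done
    done

That is 3 digests x 5 dhgroups x 5 keys = 75 attach/detach cycles, which accounts for the length and repetitiveness of the remainder of this test's output.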
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.283 nvme0n1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.283 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.541 nvme0n1 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.541 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.798 nvme0n1 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:22:31.798 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.799 03:17:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.056 nvme0n1 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.056 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.057 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.314 nvme0n1 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.314 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.315 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.573 nvme0n1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.573 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.830 nvme0n1 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:32.830 03:17:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:33.086 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.087 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.344 nvme0n1 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.344 03:17:04 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
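Note on the address selection traced here: the nvmf/common.sh@728-742 xtrace entries around this point spell out how get_main_ns_ip picks the address that every bdev_nvme_attach_controller call in this log consumes via -a. A transport-to-variable map is filled in, the entry for the active transport (tcp here) names an environment variable, and that variable's value (10.0.0.1) is printed. A minimal bash reconstruction pieced together from the trace alone -- TEST_TRANSPORT and the exported NVMF_* variable names are assumptions inferred from the [[ -z tcp ]] and echo 10.0.0.1 lines, not the verbatim nvmf/common.sh source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # rdma runs publish a dedicated target-side IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # tcp runs reuse the initiator-side IP
        [[ -z $TEST_TRANSPORT ]] && return 1          # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                   # indirect expansion; trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                 # trace: echo 10.0.0.1
    }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in the environment, the function prints 10.0.0.1, matching the -a 10.0.0.1 argument of each attach call that follows.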
00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.344 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.602 nvme0n1 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.602 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:33.603 03:17:04 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.603 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.861 nvme0n1 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.861 03:17:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.465 nvme0n1 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.465 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:34.466 
03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.466 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.723 nvme0n1 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:34.723 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:34.724 
03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.724 03:17:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 nvme0n1 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:35.289 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.290 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:35.290 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.547 nvme0n1 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
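Note on keyid 4, whose ffdhe6144 iteration is traced around this point: it is the one secret in this sweep without a paired controller key. The host/auth.sh@46 and @51 entries above show ckey expanding to the empty string and the [[ -z '' ]] guard skipping the ctrlr-key setup, and the attach that follows carries --dhchap-key key4 with no --dhchap-ctrlr-key, so only host-to-controller authentication is exercised for that key. The optional flag comes from the array expansion traced at host/auth.sh@71. An equivalent spelled-out form of the same idiom, as a sketch only -- scripts/rpc.py stands in for the test's rpc_cmd wrapper, and the example controller secret is purely illustrative:

    keyid=4
    ckeys=([0]="DHHC-1:01:exampleCtrlrSecretOnly:" [4]="")   # keyid 4 intentionally has no ctrlr key

    # ${var:+word} yields nothing when var is empty, so this is a
    # zero-element array for keyid 4 and two words otherwise.
    ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey_args[@]}"

Because the array stays empty for keyid 4, the flag disappears from the command line entirely, which is exactly the shape of the bdev_nvme_attach_controller call logged below for ffdhe6144.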
00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.547 03:17:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.112 nvme0n1 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.112 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.677 nvme0n1 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.677 03:17:07 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:36.677 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.678 03:17:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:37.243 nvme0n1 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.243 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.501 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.066 nvme0n1 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.066 03:17:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:38.066 
03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.066 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.631 nvme0n1 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.631 03:17:09 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:38.631 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:38.632 03:17:09 
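Each iteration first re-keys the target through nvmet_auth_set_key (host/auth.sh@116); the keyid 4 pass just traced is the one case with no controller key, so the [[ -z '' ]] guard short-circuits the second write. xtrace never prints redirections, so where the echo calls land is not in the log; the sketch below assumes the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host directory, and the path is illustrative only.

  # Sketch; the configfs destinations are assumptions, since xtrace hides redirections.
  nvmet_auth_set_key() {
  	local digest=$1 dhgroup=$2 keyid=$3 key ckey
  	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  	key=${keys[keyid]} ckey=${ckeys[keyid]}

  	echo "hmac(${digest})" > "$host/dhchap_hash"    # e.g. 'hmac(sha256)'
  	echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe8192
  	echo "$key" > "$host/dhchap_key"                # DHHC-1:xx:...
  	# Skipped for keyid 4, whose ckey is empty in this run
  	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }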
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.632 03:17:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 nvme0n1 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.197 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.198 
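Here the sha256 sweep ends and the same five keys replay under sha384, starting over at ffdhe2048. The nesting is visible in the markers (host/auth.sh@113, @114, @115): digests outermost, DH groups next, then every keyid, with the target re-keyed before each connect. A sketch of that driver loop follows; the exact contents of digests and dhgroups are inferred from what this log exercises (sha512 and ffdhe6144 do not appear in this excerpt and are assumptions).

  # keys[]/ckeys[] are the DHHC-1 secrets set up earlier in the test.
  digests=(sha256 sha384 sha512)                                # sha512 assumed
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # ffdhe6144 assumed
  for digest in "${digests[@]}"; do
  	for dhgroup in "${dhgroups[@]}"; do
  		for keyid in "${!keys[@]}"; do
  			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  			connect_authenticate "$digest" "$dhgroup" "$keyid"
  		done
  	done
  done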
03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.198 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.456 nvme0n1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.456 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.714 nvme0n1 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.714 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 nvme0n1 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.972 03:17:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 nvme0n1 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:40.231 03:17:11 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 nvme0n1 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:40.231 
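The ip_candidates block that precedes every attach (nvmf/common.sh@728 through @742, traced repeatedly above and continuing below) is get_main_ns_ip picking the address the initiator should dial. The map stores the names of environment variables rather than values; the transport ("tcp" in this job) selects NVMF_INITIATOR_IP, and an indirect expansion turns that name into 10.0.0.1. A compact reconstruction, with TEST_TRANSPORT standing in for whichever variable the real script consults:

  get_main_ns_ip() {
  	local ip
  	local -A ip_candidates=()
  	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not addresses
  	ip_candidates["tcp"]=NVMF_INITIATOR_IP
  	[[ -z $TEST_TRANSPORT ]] && return 1                    # 'tcp' in this run
  	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  	ip=${ip_candidates[$TEST_TRANSPORT]}
  	[[ -z ${!ip} ]] && return 1
  	echo "${!ip}"   # indirect expansion; resolves to 10.0.0.1 here
  }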
03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.231 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.489 nvme0n1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.489 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 nvme0n1 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- 
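One bash idiom repeats in every cycle above and is easy to misread: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion produces the flag and its argument only when ckeys[keyid] is set and non-empty; otherwise the array stays empty, so "${ckey[@]}" later contributes zero words to the attach command instead of an empty string. A standalone illustration:

  ckeys=([1]="DHHC-1:02:NGQy..." [4]="")   # keyid 4 has no controller key, as above
  for keyid in 1 4; do
  	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  	echo "keyid=$keyid adds ${#ckey[@]} words: ${ckey[*]}"
  done
  # keyid=1 adds 2 words: --dhchap-ctrlr-key ckey1
  # keyid=4 adds 0 words: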
common/autotest_common.sh@10 -- # set +x 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.747 03:17:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.005 nvme0n1 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.005 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.006 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.264 nvme0n1 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- 
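The steady xtrace_disable / set +x / [[ 0 == 0 ]] churn between commands is not part of the auth logic; it is autotest_common.sh muting bash tracing while rpc_cmd runs and restoring it afterwards, with the [[ 0 == 0 ]] lines being the restore path checking its saved state. The real helpers track nesting and more state; a minimal sketch of the pattern, with the rpc.py path an assumption:

  xtrace_disable() {
  	PREV_BASH_OPTS=$-   # remember whether -x was active
  	set +x
  }
  xtrace_restore() {
  	if [[ $PREV_BASH_OPTS == *x* ]]; then
  		set -x
  	fi
  }
  rpc_cmd() {
  	xtrace_disable
  	"$rootdir/scripts/rpc.py" "$@"   # $rootdir: the SPDK checkout (assumed)
  	local rc=$?
  	xtrace_restore
  	return $rc
  }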
host/auth.sh@49 -- # echo ffdhe3072 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.264 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.523 nvme0n1 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.523 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 nvme0n1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.781 03:17:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.039 nvme0n1 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:42.039 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.040 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.040 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.297 nvme0n1 00:22:42.297 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.297 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.297 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.297 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.297 03:17:13 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.297 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.553 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
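The rounds logged above all follow one pattern per (digest, dhgroup, keyid) combination: program the DH-HMAC-CHAP secret into the kernel nvmet target, pin the SPDK initiator to a single digest/dhgroup pair with bdev_nvme_set_options, attach with the host key (and, when one exists, the controller key), confirm the controller came up as nvme0, and detach before the next combination. Below is a minimal sketch of one such round reconstructed from the trace: the rpc invocations and key strings are taken verbatim from this log, but the configfs attribute paths are assumptions (the xtrace output shows the echo commands without their redirect targets), and key3/ckey3 name keys registered with the initiator earlier in the run, outside this excerpt.

    # One DH-HMAC-CHAP round: sha384 digest, ffdhe4096 group, key index 3.
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs path
    key='DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==:'
    ckey='DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD:'

    # Target side: hash, DH group, host secret, optional controller secret.
    echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"
    echo ffdhe4096      > "$host_cfs/dhchap_dhgroup"
    echo "$key"         > "$host_cfs/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host_cfs/dhchap_ctrl_key"

    # Initiator side: allow exactly one digest/dhgroup pair so the handshake
    # cannot fall back to any combination other than the one under test.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach with bidirectional authentication, verify the controller
    # appeared, then tear down before the next key/dhgroup combination.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

Omitting --dhchap-ctrlr-key, as the keyid-4 rounds in this log do, still authenticates the host to the controller; the extra key only adds the reverse (controller-to-host) direction.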
00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.554 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 nvme0n1 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:42.811 03:17:13 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.811 03:17:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.070 nvme0n1 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.070 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.635 nvme0n1 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:43.635 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:43.636 
03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.636 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 nvme0n1 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 03:17:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:43.894 
03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.894 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 nvme0n1 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.459 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:44.460 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.718 nvme0n1 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.718 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
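Every secret in this section uses the DHHC-1 textual representation, DHHC-1:<t>:<base64>:. Judging by the key lengths in this run, the base64 blob carries the raw secret plus a 4-byte trailer (a CRC-32 of the secret, per the DH-HMAC-CHAP key format), and <t> values 01/02/03 pair with 32-, 48- and 64-byte secrets while 00 marks an untransformed one. A small shape check under those assumptions:

    # Sanity-check the shape of a DHHC-1 secret string; the trailer size and
    # the <t> encoding are assumptions inferred from the keys in this log.
    check_dhchap_key() {
        local key=$1 nbytes
        [[ $key =~ ^DHHC-1:(00|01|02|03):([A-Za-z0-9+/]+=*):$ ]] || return 1
        nbytes=$(base64 -d <<< "${BASH_REMATCH[2]}" | wc -c)
        # 32-, 48- or 64-byte secret plus the assumed 4-byte CRC-32 trailer
        case $nbytes in 36|52|68) ;; *) return 1 ;; esac
    }

    # keyid-0 host key from this run: 32-byte secret + 4-byte trailer
    check_dhchap_key 'DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r:' && echo OK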
00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.976 03:17:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.233 nvme0n1 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.233 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.234 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.799 nvme0n1 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:45.799 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.064 03:17:16 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:46.064 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.065 03:17:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.634 nvme0n1 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.634 03:17:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 nvme0n1 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:47.200 
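
The secrets set here use the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:&lt;t&gt;:&lt;base64&gt;:, where &lt;t&gt; is 00 for a cleartext secret and 01/02/03 for one pre-transformed with HMAC-SHA-256/-384/-512, and the base64 payload is the key material with a CRC-32 appended; the keyids in this run mix all four &lt;t&gt; values. Recent nvme-cli can mint such secrets; a hedged example whose flag spelling is worth checking against the installed version:

  # Generate a 48-byte DH-HMAC-CHAP secret transformed with SHA-384 (hmac=2),
  # bound to the host NQN used throughout this log. Assumes nvme-cli >= 2.x.
  nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
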
03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.200 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.765 nvme0n1 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.765 03:17:18 
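
The recurring check [[ nvme0 == \n\v\m\e\0 ]] is not garbled output: bash xtrace backslash-escapes the right-hand side of [[ == ]] when the source quoted it, because an unquoted RHS would be treated as a glob pattern, so the escaping is how a literal comparison shows up in the trace. A two-line demonstration of the distinction:

  # Both succeed, but for different reasons; xtrace prints the first as the escaped form above.
  [[ nvme0 == "nvme0" ]] && echo literal-match
  [[ nvme0 == nvme*   ]] && echo glob-match     # unquoted RHS is a pattern
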
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.765 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:47.766 03:17:18 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.766 03:17:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.333 nvme0n1 00:22:48.333 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.333 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.333 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:48.333 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.333 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:48.592 
03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.592 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.593 nvme0n1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.593 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:48.851 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.852 nvme0n1 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.852 03:17:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.110 nvme0n1 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.110 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.111 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.370 nvme0n1 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:49.370 03:17:20 
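
nvmet_auth_set_key is the target-side half of each iteration: the bare echo lines ('hmac(sha512)', the DH group name, the two DHHC-1 strings) are writes whose redirection targets xtrace hides, most plausibly the kernel nvmet configfs attributes for the allowed host. A sketch under that assumption, with paths following the stock nvmet configfs layout and placeholder secrets standing in for the keyid under test:

  # Hedged reconstruction of the target-side key setup (kernel nvmet via configfs).
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'        > "$host/dhchap_hash"      # kernel crypto API digest name
  echo ffdhe2048             > "$host/dhchap_dhgroup"
  echo 'DHHC-1:03:<base64>:' > "$host/dhchap_key"       # secret the host must present
  echo 'DHHC-1:03:<base64>:' > "$host/dhchap_ctrl_key"  # optional; enables bidirectional auth
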
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.370 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 nvme0n1 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
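
End to end, one iteration of the matrix (digest x DH group x keyid) reduces on the host side to the RPC sequence traced around this point. A condensed reconstruction, assuming scripts/rpc.py from the SPDK checkout and key objects key0..key4/ckey0..ckey4 registered earlier in the test, outside this excerpt:

  # Hedged sketch of one connect_authenticate pass (host side; sha512/ffdhe2048, keyid 4).
  rpc=./scripts/rpc.py
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key4                 # keyid 4 has no ctrlr key, so no --dhchap-ctrlr-key
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  $rpc bdev_nvme_detach_controller nvme0
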
00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:49.630 
03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 nvme0n1 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.630 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.889 03:17:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.889 nvme0n1 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.889 03:17:21 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.245 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.246 nvme0n1 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:50.246 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.506 nvme0n1 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.506 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.791 nvme0n1 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.791 03:17:21 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.050 nvme0n1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.050 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.308 nvme0n1 00:22:51.308 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.308 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.308 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:51.308 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.309 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.567 nvme0n1 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.567 03:17:22 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.567 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
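The entries above are the target-side half of each test iteration: nvmet_auth_set_key (host/auth.sh@42-51 in the trace) takes a digest, DH group, and key index, then echoes the digest in kernel-crypto form ('hmac(sha512)'), the DH group name, the DHHC-1 host key, and, when one exists, the controller key. The trace only shows the echoed values, not where they are written; a minimal sketch of the function, assuming the writes land in the standard Linux kernel nvmet configfs host entry:

    # Sketch under assumptions: keys[]/ckeys[] are the arrays the loop at
    # host/auth.sh@114-116 iterates over, and the configfs paths follow the
    # usual kernel nvmet layout (neither is visible in the log itself).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. 'hmac(sha512)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe4096
        echo "$key" > "$host/dhchap_key"              # DHHC-1:..: host key
        # An empty ckey (the keyid=4 entries in this trace) leaves the
        # controller key unset, so only unidirectional auth is exercised.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }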
00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:51.826 nvme0n1 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.826 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.085 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.085 03:17:22 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.085 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.085 03:17:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:52.085 03:17:23 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.085 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.342 nvme0n1 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:52.342 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.343 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 nvme0n1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r 
'.[].name' 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:52.600 
03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.600 03:17:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 nvme0n1 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:53.166 
03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.166 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.424 nvme0n1 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.424 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:53.682 03:17:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.941 nvme0n1 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
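The host-side half of each iteration is connect_authenticate (host/auth.sh@68-78), and every step of it is visible in the trace: bdev_nvme_set_options restricts the initiator to the one digest/DH-group pair under test, get_main_ns_ip (nvmf/common.sh@728-742) maps the tcp transport to NVMF_INITIATOR_IP and yields 10.0.0.1, and the connection only shows up as nvme0 in bdev_nvme_get_controllers if DH-HMAC-CHAP negotiation succeeded. A condensed replay of one cycle (rpc_cmd is the suite's RPC wrapper; keyN/ckeyN name keys already registered with the SPDK target):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Advertise only the combination under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # host/auth.sh@71: the controller-key argument materializes only
        # when a ckey exists; ${var:+...} expands to nothing otherwise.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # A controller that enumerates is the pass criterion (auth.sh@77).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }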
00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.941 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:54.506 nvme0n1 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.506 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MWFjYzhjZjJiNzVlOTA2NmNiOTJmOGEwMzIzYTJhYTmjrc0r: 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZDZkNDg3ZmU0Mjk5ZGI2NzI3YzJhZmYyZThiNzZkYzJmMDQzMmZkZTczMjVjNzNjMzFlNmZmODQ2YWVmZTU2YZ2XFfw=: 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.507 03:17:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.072 nvme0n1 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.072 03:17:26 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:55.072 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.073 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.638 nvme0n1 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:Mjk1OWIxNjZlZjk0N2U4ODFiMGU0YTVlOGUwZjM4MWRYR0y+: 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTQ0MzA5MWNjMzE5MjQ1NWI3ZTQ0MjhiOGI0YjRkYTRuiVX0: 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.638 03:17:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:56.204 nvme0n1 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.204 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:56.462 
03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg3ODc4YzI4NWI5OGFmNTVkMjMxYTg3ZTUwMjRjYWVhNDk4NmExOTJjMmNkMTY0K0PCyA==: 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: ]] 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZmM2ZDMxYjgwYzkwYjVkNzk4ODUxZmZjZDU3N2I1NmOVDyAD: 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:56.462 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.463 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 nvme0n1 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 03:17:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.031 03:17:28 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1ODk0YmZjOWI3NzIwYWU2M2JiMzNjYmExNmYzZjI4ZjRiMjJmNTliMDNhNDc1MzRmYjg2MGQ2OTJlNDBmOILDwYM=: 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:57.031 03:17:28 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.031 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.032 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 nvme0n1 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:OTI4OTE1MjQ5NTAzNWQ1NDRiYWNhNzBmNzM3MGJlYWVjYmM3MzM5MzE5MGY2YjQ32tVMag==: 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NGQyMTZkZDdhMjJkNjlhOGE3YjEzNTJhNTMxMjIxODZiMTgwNmQ1ZDgzNDE3YTE0eyDdvw==: 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.598 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:57.599 
03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.599 request: 00:22:57.599 { 00:22:57.599 "name": "nvme0", 00:22:57.599 "trtype": "tcp", 00:22:57.599 "traddr": "10.0.0.1", 00:22:57.599 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:57.599 "adrfam": "ipv4", 00:22:57.599 "trsvcid": "4420", 00:22:57.599 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:57.599 "method": "bdev_nvme_attach_controller", 00:22:57.599 "req_id": 1 00:22:57.599 } 00:22:57.599 Got JSON-RPC error response 00:22:57.599 response: 00:22:57.599 { 00:22:57.599 "code": -32602, 00:22:57.599 "message": "Invalid parameters" 00:22:57.599 } 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.599 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.858 03:17:28 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.858 request: 00:22:57.858 { 00:22:57.858 "name": "nvme0", 00:22:57.858 "trtype": "tcp", 00:22:57.858 "traddr": "10.0.0.1", 00:22:57.858 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:57.858 "adrfam": "ipv4", 00:22:57.858 "trsvcid": "4420", 00:22:57.858 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:57.858 "dhchap_key": "key2", 00:22:57.858 "method": "bdev_nvme_attach_controller", 00:22:57.858 "req_id": 1 00:22:57.858 } 00:22:57.858 Got JSON-RPC error response 00:22:57.858 response: 00:22:57.858 { 00:22:57.858 "code": -32602, 00:22:57.858 "message": "Invalid parameters" 00:22:57.858 } 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:22:57.858 request: 00:22:57.858 { 00:22:57.858 "name": "nvme0", 00:22:57.858 "trtype": "tcp", 00:22:57.858 "traddr": "10.0.0.1", 00:22:57.858 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:57.858 "adrfam": "ipv4", 00:22:57.858 "trsvcid": "4420", 00:22:57.858 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:57.858 "dhchap_key": "key1", 00:22:57.858 "dhchap_ctrlr_key": "ckey2", 00:22:57.858 "method": "bdev_nvme_attach_controller", 00:22:57.858 "req_id": 1 00:22:57.858 } 00:22:57.858 Got JSON-RPC error response 00:22:57.858 response: 00:22:57.858 { 00:22:57.858 "code": -32602, 00:22:57.858 "message": "Invalid parameters" 00:22:57.858 } 
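
All three attach attempts above fail with JSON-RPC error -32602: once the target enforces DH-HMAC-CHAP, connecting with no key, with the wrong key (key2 while the target was given keyid 1), or with a mismatched controller key (key1 paired with ckey2) is rejected, and the harness counts each expected failure as a pass. For contrast, here is a minimal sketch of the successful flow exercised earlier in this test, using only flags that appear in the log; it assumes rpc_cmd wraps scripts/rpc.py as in SPDK's test harness and that key1/ckey1 were loaded into the keyring earlier in auth.sh:

# Allow exactly one digest/DH-group pair on the host side
# (same flags host/auth.sh uses above).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Bidirectional authentication: key1 authenticates the host to the
# controller, ckey1 authenticates the controller back to the host.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then detach before the next keyid.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
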
00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.858 rmmod nvme_tcp 00:22:57.858 rmmod nvme_fabrics 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 1143667 ']' 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 1143667 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 1143667 ']' 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 1143667 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.858 03:17:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1143667 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1143667' 00:22:58.117 killing process with pid 1143667 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 1143667 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 1143667 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.117 03:17:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:00.648 03:17:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:02.550 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:02.550 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:03.486 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:23:03.486 03:17:34 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.z6I /tmp/spdk.key-null.goV /tmp/spdk.key-sha256.sar /tmp/spdk.key-sha384.FXG /tmp/spdk.key-sha512.ZoQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:03.486 03:17:34 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:05.399 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:05.399 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:23:05.399 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:05.399 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:05.659 00:23:05.659 real 0m47.361s 00:23:05.659 user 0m42.127s 00:23:05.659 sys 0m10.690s 00:23:05.659 03:17:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.659 03:17:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:05.659 ************************************ 00:23:05.659 END TEST nvmf_auth 00:23:05.659 ************************************ 00:23:05.659 03:17:36 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:23:05.659 03:17:36 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:05.659 03:17:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:05.659 03:17:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.659 03:17:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.659 ************************************ 00:23:05.659 START TEST nvmf_digest 00:23:05.659 ************************************ 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:05.659 * Looking for test storage... 
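
nvmf_digest begins here. Before generating any traffic, digest.sh sources test/nvmf/common.sh, which, as the records below show, creates a fresh host identity with nvme-cli and reuses the NQN's UUID suffix as the host ID. A condensed sketch of that step; the parameter expansion is an assumption about how the UUID is extracted, not a quote from common.sh:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
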
00:23:05.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.659 03:17:36 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.659 03:17:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.927 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:10.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:10.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:10.928 Found net devices under 0000:86:00.0: cvl_0_0 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:10.928 Found net devices under 0000:86:00.1: cvl_0_1 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.928 03:17:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:10.928 00:23:10.928 --- 10.0.0.2 ping statistics --- 00:23:10.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.928 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:10.928 00:23:10.928 --- 10.0.0.1 ping statistics --- 00:23:10.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.928 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.928 03:17:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:11.187 ************************************ 00:23:11.187 START TEST nvmf_digest_clean 00:23:11.187 ************************************ 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1156670 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1156670 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1156670 ']' 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:11.187 
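
The topology behind the ping checks above: nvmftestinit moved one port of the e810 NIC (cvl_0_0) into a private network namespace to serve as the target at 10.0.0.2, left its sibling port cvl_0_1 in the root namespace as the initiator at 10.0.0.1, and then launched nvmf_tgt inside that namespace (the --wait-for-rpc record just above). The same commands, collected from the records above into one runnable sketch; interface names are whatever the ice driver created on this machine:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port on the initiator side
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

Keeping the target behind a namespace boundary forces the NVMe/TCP traffic to traverse the two physical NIC ports rather than the kernel's local loopback path, which is the point of the NET_TYPE=phy configuration.
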
03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.187 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:11.187 [2024-05-15 03:17:42.161602] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:11.187 [2024-05-15 03:17:42.161640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.187 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.187 [2024-05-15 03:17:42.211603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.187 [2024-05-15 03:17:42.283187] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.187 [2024-05-15 03:17:42.283225] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.187 [2024-05-15 03:17:42.283231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.187 [2024-05-15 03:17:42.283237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.187 [2024-05-15 03:17:42.283242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
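The interface plumbing traced above reduces to a small namespace recipe: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the wiring. A condensed sketch of the same commands (collected from the nvmf_tcp_init trace, not a separate script):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

This is also why NVMF_TARGET_NS_CMD is prepended to NVMF_APP above: every target-side process, including the nvmf_tgt started next, runs under ip netns exec cvl_0_0_ns_spdk.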
00:23:11.188 [2024-05-15 03:17:42.283280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.126 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.126 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:12.126 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.126 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.126 03:17:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 null0 00:23:12.126 [2024-05-15 03:17:43.091665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.126 [2024-05-15 03:17:43.115684] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:12.126 [2024-05-15 03:17:43.115880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1156757 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1156757 /var/tmp/bperf.sock 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1156757 ']' 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:12.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.126 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 [2024-05-15 03:17:43.165749] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:12.126 [2024-05-15 03:17:43.165790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156757 ] 00:23:12.126 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.126 [2024-05-15 03:17:43.220445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.425 [2024-05-15 03:17:43.301452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.991 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.991 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:12.991 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:12.991 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:12.991 03:17:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:13.249 03:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:13.249 03:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:13.506 nvme0n1 00:23:13.506 03:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:13.506 03:17:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:13.764 Running I/O for 2 seconds... 
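Each run_bperf pass drives the bdevperf instance parked on /var/tmp/bperf.sock through the same RPC choreography: finish startup (bdevperf was launched with -z --wait-for-rpc), attach an NVMe-oF controller with data digest enabled, then trigger the workload. A sketch of the sequence just traced, with the jenkins workspace paths abbreviated to the SPDK tree:

  RPC='scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC framework_start_init                         # complete bdevperf init
  $RPC bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst: CRC32C data digest on every data PDU
  examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests          # run the -w/-o/-q workload for -t 2 seconds

The digest work itself is what the test is after: with --ddgst set, both sides must compute CRC32C over the payloads, and the accel framework records which module did it.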
00:23:15.665 00:23:15.665 Latency(us) 00:23:15.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.665 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:15.665 nvme0n1 : 2.00 25085.86 97.99 0.00 0.00 5096.24 2179.78 11397.57 00:23:15.665 =================================================================================================================== 00:23:15.665 Total : 25085.86 97.99 0.00 0.00 5096.24 2179.78 11397.57 00:23:15.665 0 00:23:15.665 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:15.665 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:15.665 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:15.665 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:15.665 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:15.665 | select(.opcode=="crc32c") 00:23:15.665 | "\(.module_name) \(.executed)"' 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1156757 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1156757 ']' 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1156757 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1156757 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1156757' 00:23:15.923 killing process with pid 1156757 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1156757 00:23:15.923 Received shutdown signal, test time was about 2.000000 seconds 00:23:15.923 00:23:15.923 Latency(us) 00:23:15.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.923 =================================================================================================================== 00:23:15.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.923 03:17:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1156757 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:16.181 03:17:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:16.181 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1157402 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1157402 /var/tmp/bperf.sock 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1157402 ']' 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:16.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.182 03:17:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:16.182 [2024-05-15 03:17:47.209396] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:16.182 [2024-05-15 03:17:47.209445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157402 ] 00:23:16.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:16.182 Zero copy mechanism will not be used. 
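The "greater than zero copy threshold" notice only appears for the 128 KiB runs: payloads above 64 KiB make the initiator fall back from its zero-copy send path to ordinary buffered copies (presumably a sock-layer threshold; the exact layer is not shown in this log). It is informational, not a failure, and the 4 KiB runs never print it:

  # 131072-byte I/Os exceed the 65536-byte cutoff, 4096-byte ones do not
  (( 131072 > 65536 )) && echo 'zero copy skipped for 128 KiB'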
00:23:16.182 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.182 [2024-05-15 03:17:47.263298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.182 [2024-05-15 03:17:47.337147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:17.116 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:17.682 nvme0n1 00:23:17.682 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:17.682 03:17:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:17.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:17.682 Zero copy mechanism will not be used. 00:23:17.682 Running I/O for 2 seconds... 
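Once a run completes, the harness verifies not just that I/O finished but that the expected accel module computed the digests: it reads the module name and execution count out of bdevperf's accel statistics. With scan_dsa=false the expected module is software, and the check passes when at least one crc32c operation executed. The filter, verbatim from host/digest.sh:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # -> 'software <count>'; the test then asserts (( count > 0 ))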
00:23:19.583 00:23:19.583 Latency(us) 00:23:19.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.583 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:19.583 nvme0n1 : 2.00 4483.41 560.43 0.00 0.00 3566.06 961.67 5869.75 00:23:19.583 =================================================================================================================== 00:23:19.583 Total : 4483.41 560.43 0.00 0.00 3566.06 961.67 5869.75 00:23:19.583 0 00:23:19.583 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:19.583 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:19.583 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:19.583 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:19.583 | select(.opcode=="crc32c") 00:23:19.583 | "\(.module_name) \(.executed)"' 00:23:19.583 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1157402 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1157402 ']' 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1157402 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1157402 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1157402' 00:23:19.841 killing process with pid 1157402 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1157402 00:23:19.841 Received shutdown signal, test time was about 2.000000 seconds 00:23:19.841 00:23:19.841 Latency(us) 00:23:19.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.841 =================================================================================================================== 00:23:19.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.841 03:17:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1157402 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:20.100 03:17:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1158097 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1158097 /var/tmp/bperf.sock 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1158097 ']' 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:20.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:20.100 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:20.100 [2024-05-15 03:17:51.179438] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:23:20.100 [2024-05-15 03:17:51.179490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158097 ] 00:23:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.100 [2024-05-15 03:17:51.232662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.358 [2024-05-15 03:17:51.314047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.925 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.925 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:20.925 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:20.925 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:20.925 03:17:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:21.183 03:17:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.183 03:17:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.441 nvme0n1 00:23:21.441 03:17:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:21.441 03:17:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.441 Running I/O for 2 seconds... 
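By this point the pattern is clear: nvmf_digest_clean sweeps the same bperf sequence over a small matrix, randread and randwrite, each at 4 KiB/qd128 and 128 KiB/qd16, always with DSA scanning off. The four explicit run_bperf calls are equivalent to:

  # compact restatement of the four calls in host/digest.sh, not the actual loop
  for spec in 'randread 4096 128' 'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $spec false      # rw, block size, queue depth, scan_dsa (unquoted on purpose)
  done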
00:23:23.971 00:23:23.971 Latency(us) 00:23:23.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.971 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:23.971 nvme0n1 : 2.00 28519.71 111.41 0.00 0.00 4481.73 1581.41 11739.49 00:23:23.971 =================================================================================================================== 00:23:23.971 Total : 28519.71 111.41 0.00 0.00 4481.73 1581.41 11739.49 00:23:23.971 0 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:23.971 | select(.opcode=="crc32c") 00:23:23.971 | "\(.module_name) \(.executed)"' 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1158097 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1158097 ']' 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1158097 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1158097 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1158097' 00:23:23.971 killing process with pid 1158097 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1158097 00:23:23.971 Received shutdown signal, test time was about 2.000000 seconds 00:23:23.971 00:23:23.971 Latency(us) 00:23:23.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.971 =================================================================================================================== 00:23:23.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.971 03:17:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1158097 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:23.971 03:17:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1158788 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1158788 /var/tmp/bperf.sock 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1158788 ']' 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:23.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.971 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:23.971 [2024-05-15 03:17:55.063405] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:23.971 [2024-05-15 03:17:55.063451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158788 ] 00:23:23.971 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:23.971 Zero copy mechanism will not be used. 
00:23:23.971 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.971 [2024-05-15 03:17:55.117757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.229 [2024-05-15 03:17:55.197338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.797 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.797 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:24.797 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:24.797 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:24.797 03:17:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:25.055 03:17:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.055 03:17:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.313 nvme0n1 00:23:25.313 03:17:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:25.313 03:17:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:25.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:25.572 Zero copy mechanism will not be used. 00:23:25.572 Running I/O for 2 seconds... 
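The MiB/s column in these latency tables is just IOPS scaled by the I/O size (MiB/s = IOPS * bs / 2^20), and the figures above are self-consistent. Checking two of them:

  echo '25085.86 * 4096 / 1048576' | bc -l     # -> 97.99..., matches the 4 KiB randread row
  echo '4483.41 * 131072 / 1048576' | bc -l    # -> 560.42625, rounds to the 560.43 in the 128 KiB row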
00:23:27.472 00:23:27.472 Latency(us) 00:23:27.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.472 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:27.472 nvme0n1 : 2.00 5540.83 692.60 0.00 0.00 2883.24 1638.40 5100.41 00:23:27.472 =================================================================================================================== 00:23:27.472 Total : 5540.83 692.60 0.00 0.00 2883.24 1638.40 5100.41 00:23:27.472 0 00:23:27.472 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:27.472 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:27.472 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:27.472 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:27.472 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:27.472 | select(.opcode=="crc32c") 00:23:27.472 | "\(.module_name) \(.executed)"' 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1158788 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1158788 ']' 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1158788 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1158788 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1158788' 00:23:27.730 killing process with pid 1158788 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1158788 00:23:27.730 Received shutdown signal, test time was about 2.000000 seconds 00:23:27.730 00:23:27.730 Latency(us) 00:23:27.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.730 =================================================================================================================== 00:23:27.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.730 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1158788 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1156670 00:23:27.988 03:17:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1156670 ']' 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1156670 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1156670 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1156670' 00:23:27.988 killing process with pid 1156670 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1156670 00:23:27.988 [2024-05-15 03:17:58.992222] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:27.988 03:17:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1156670 00:23:28.246 00:23:28.246 real 0m17.075s 00:23:28.246 user 0m32.956s 00:23:28.246 sys 0m4.275s 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:28.246 ************************************ 00:23:28.246 END TEST nvmf_digest_clean 00:23:28.246 ************************************ 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.246 ************************************ 00:23:28.246 START TEST nvmf_digest_error 00:23:28.246 ************************************ 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1159516 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1159516 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
1159516 ']' 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.246 03:17:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:28.246 [2024-05-15 03:17:59.305785] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:28.246 [2024-05-15 03:17:59.305822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.246 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.246 [2024-05-15 03:17:59.362296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.505 [2024-05-15 03:17:59.440418] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.505 [2024-05-15 03:17:59.440452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.505 [2024-05-15 03:17:59.440460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.505 [2024-05-15 03:17:59.440469] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.505 [2024-05-15 03:17:59.440474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
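This second target exists for one reason: unlike the digest-clean target, nvmf_digest_error immediately reroutes the crc32c opcode to the error accel module (the accel_rpc NOTICE just below), a module whose behavior can later be flipped between pass-through and deliberate corruption. Because this target was also started with --wait-for-rpc, the rerouting lands before any subsystem configuration is applied. In harness terms, rpc_cmd against the target's default socket:

  scripts/rpc.py accel_assign_opc -o crc32c -m error   # target-side digests now flow through 'error'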
00:23:28.505 [2024-05-15 03:17:59.440508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:29.071 [2024-05-15 03:18:00.170648] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.071 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:29.330 null0 00:23:29.330 [2024-05-15 03:18:00.261291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.330 [2024-05-15 03:18:00.285293] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:29.330 [2024-05-15 03:18:00.285509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1159764 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1159764 /var/tmp/bperf.sock 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1159764 ']' 00:23:29.330 
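The RPC exchange that follows sets the stage for injected digest failures: error statistics and unlimited retries are enabled on the bdevperf side, corruption is kept disabled while the controller attaches (so the connect itself is clean), and only then is crc32c corruption armed on the target before the workload starts. A sketch, with paths abbreviated and the -i 256 argument taken as-is from the trace:

  BPERF='scripts/rpc.py -s /var/tmp/bperf.sock'
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # -1: retry failed I/O indefinitely
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # target: no corruption during connect
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # target: start corrupting digests
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest shows up below as an nvme_tcp.c:1450 "data digest error" on the qpair, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with retries at -1, bdevperf reissues the read instead of failing the job.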
03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:29.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.330 03:18:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:29.330 [2024-05-15 03:18:00.336480] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:29.330 [2024-05-15 03:18:00.336522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159764 ] 00:23:29.330 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.330 [2024-05-15 03:18:00.391174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.330 [2024-05-15 03:18:00.470294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.264 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.264 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:23:30.264 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.265 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.523 nvme0n1 00:23:30.523 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:30.523 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.523 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:30.523 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.523 03:18:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:30.523 03:18:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:30.781 Running I/O for 2 seconds... 00:23:30.781 [2024-05-15 03:18:01.718516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.718552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.718563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.729280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.729305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.729314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.738978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.739001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.739009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.747209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.747230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.747238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.757671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.757692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.757701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.767380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.767402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.767410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.777605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.785584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.785603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.785611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.795951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.795971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.795979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.805126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.805146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.805155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.815277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.815297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.815305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.823731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.823750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.823758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.836089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:30.782 [2024-05-15 03:18:01.836108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.782 [2024-05-15 03:18:01.836119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.782 [2024-05-15 03:18:01.844488] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.844508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.844516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.856533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.856553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.856562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.864899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.864919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.864927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.876317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.876338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.876346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.885861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.885881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.885889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.895133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.895152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.895160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.904415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.904444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.916093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.916113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.916121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.926728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.926751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.926758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:30.782 [2024-05-15 03:18:01.935754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:30.782 [2024-05-15 03:18:01.935774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.782 [2024-05-15 03:18:01.935782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.947693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.947712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.947721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.956448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.956473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.956482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.966320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.966339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.966347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.976660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.976678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.976686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.986018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.986038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.986046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:01.995454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:01.995478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:01.995486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.008955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:02.008975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:02.008983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.017600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:02.017620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:02.017628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.029568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:02.029588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:02.029595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.042217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:02.042237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:02.042245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.055483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.041 [2024-05-15 03:18:02.055502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.041 [2024-05-15 03:18:02.055510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.041 [2024-05-15 03:18:02.063812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.063831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.063839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.076113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.076133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.076141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.088070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.088089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.088097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.097863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.097883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.097890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.107273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.107293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.107304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.116629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.116648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.116656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.127207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.127226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.127233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.136634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.136653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.136661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.147168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.147187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.147195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.160149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.160181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.170050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.170070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.170078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.178964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.178983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.178992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.189208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.189228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.189236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.042 [2024-05-15 03:18:02.198575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.042 [2024-05-15 03:18:02.198595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.042 [2024-05-15 03:18:02.198603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.300 [2024-05-15 03:18:02.207835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.300 [2024-05-15 03:18:02.207854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.300 [2024-05-15 03:18:02.207862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.300 [2024-05-15 03:18:02.217560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.300 [2024-05-15 03:18:02.217578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.300 [2024-05-15 03:18:02.217586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.300 [2024-05-15 03:18:02.226238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.300 [2024-05-15 03:18:02.226257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.226266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.236719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.236739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.236747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.246814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.246834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.246841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.255387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.255415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.266048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.266067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.266075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.276316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.276335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.276345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.284963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.284983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.284991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.295182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.295201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.295209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.303535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.303556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.303563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.314110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.314130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.314138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.323617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.323636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.333099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.333118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.333136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.342664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.342683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.342691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.352832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.352851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.352859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.362528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.362551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.362559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.371342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.371362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.371370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.382035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.382064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.393613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.393633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.393641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.402166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.402186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.402194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.412867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.412887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.412895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.422926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.422944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.422952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.431319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.431338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.431346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.442517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.442536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.442544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.301 [2024-05-15 03:18:02.452459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.301 [2024-05-15 03:18:02.452482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.301 [2024-05-15 03:18:02.452490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.462052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.462072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.462079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.470341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.470360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.470367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.480602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.480630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.491022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.491040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.491047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.500001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.500020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.500029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.509414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.509434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.509442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.519760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.519780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.519788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.529477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.529497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.529510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.538694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.538713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.538721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.547900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.547920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.547928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.558516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.558536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.558543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.568625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.568645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.568653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.576576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.576595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.576603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.587289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.587309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.587317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.598240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.598259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.598267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.606846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.606865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.606872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.617213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.617235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.617242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.626324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.626343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.626351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.635617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.635644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.647203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.595 [2024-05-15 03:18:02.647223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.595 [2024-05-15 03:18:02.647231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.595 [2024-05-15 03:18:02.655017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.655036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.655044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.667423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.667443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.667450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.679895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.679915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.679923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.688129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.688148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.688156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.699010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.699029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.699037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.710889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.710909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.710917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.596 [2024-05-15 03:18:02.719868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.596 [2024-05-15 03:18:02.719888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.596 [2024-05-15 03:18:02.719895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.731252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.731272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.731280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.741746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.741766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.741774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.750822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.750843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.750851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.761829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.761851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.761859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.770413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.770432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.770440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.783104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.783125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.783133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.791564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.791588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.791596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.803537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.803557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.803565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.816344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.816365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.816373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.826946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.826966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.826974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.836567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.836588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.836596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.847448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.847474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.847482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.855348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.855368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.855376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.866743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.866763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.866772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.876798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.876818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.876829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.884987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.885007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.885015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.896722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.896741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.896749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.908382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.908401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.908410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.917300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.888 [2024-05-15 03:18:02.917319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.888 [2024-05-15 03:18:02.917327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.888 [2024-05-15 03:18:02.927648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.927670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.927678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.937944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.937964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.937972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.947496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.947517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.947525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.956590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.956611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.956619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.967202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.967222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.967233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.976116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.976137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.976146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.985811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.985831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.985839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:02.996547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:02.996569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:02.996577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:03.005483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:03.005504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:03.005512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:03.015485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:03.015506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:03.015514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:03.025705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:03.025726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:03.025734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:03.035490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:03.035526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:03.035534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.889 [2024-05-15 03:18:03.044733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:31.889 [2024-05-15 03:18:03.044753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.889 [2024-05-15 03:18:03.044761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.055413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.055437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.055445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.065259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.065279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.065287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.074787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.074807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.074815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.084030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.084056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.094521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.094541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.094549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.103808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.103828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.103836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.115981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.116010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.128585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.128607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.128615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.139678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.139699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.139707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.148583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.148603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.148611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:32.148 [2024-05-15 03:18:03.158815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910)
00:23:32.148 [2024-05-15 03:18:03.158835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.148 [2024-05-15 03:18:03.158843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.169048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.169069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.169078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.178082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.178103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.178111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.188236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.188256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.188264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.196219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.196239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.196246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.207744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.207764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.207772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.217656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.217675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.217694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.226624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.226644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.226655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.235415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.235434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.235442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.246095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.246115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.246123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.255944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.148 [2024-05-15 03:18:03.255963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.148 [2024-05-15 03:18:03.255972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.148 [2024-05-15 03:18:03.265608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.149 [2024-05-15 03:18:03.265628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.149 [2024-05-15 03:18:03.265636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.149 [2024-05-15 03:18:03.277377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.149 [2024-05-15 03:18:03.277397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.149 [2024-05-15 03:18:03.277405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.149 [2024-05-15 03:18:03.288779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.149 [2024-05-15 03:18:03.288798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.149 [2024-05-15 03:18:03.288806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.149 [2024-05-15 03:18:03.297404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.149 [2024-05-15 03:18:03.297425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.149 [2024-05-15 03:18:03.297432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.149 [2024-05-15 03:18:03.308229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.149 [2024-05-15 03:18:03.308250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.149 [2024-05-15 03:18:03.308258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.418 [2024-05-15 03:18:03.319675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.418 [2024-05-15 03:18:03.319697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.319705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.330153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.330173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.330181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.338950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.338970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.338978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.349062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.349082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.349090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.358174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.358194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.358202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.369175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.369194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:32.419 [2024-05-15 03:18:03.369202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.378728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.378747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.378754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.387545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.387565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.387572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.396771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.396791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.396802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.406947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.406974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.416013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.416032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.416040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.426210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.426230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.426237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.435507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.435527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.435535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.446390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.446410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.446418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.454965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.454984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.454992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.466811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.466831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.466839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.479303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.479323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.479331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.492488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.492512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.492520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.504954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.504974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.504983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.513702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.513722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.513729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.526085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.526105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.526113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.534540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.534559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.534567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.546861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.546897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.546906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.559234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.559253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.559261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.419 [2024-05-15 03:18:03.571890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.419 [2024-05-15 03:18:03.571911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.419 [2024-05-15 03:18:03.571919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.583557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.583576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.583584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.592473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 
00:23:32.682 [2024-05-15 03:18:03.592492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.592500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.604419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.604438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.604446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.612983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.613002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.613010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.625582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.625602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.625610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.637082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.637102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.637110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.645816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.645836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.645844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.659105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.659124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.659132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.667796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.667815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.667823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.677541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.677561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.677572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.687177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.687197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.687204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.696727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.696745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.696754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 [2024-05-15 03:18:03.705952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df1910) 00:23:32.682 [2024-05-15 03:18:03.705971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.682 [2024-05-15 03:18:03.705979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.682 00:23:32.682 Latency(us) 00:23:32.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.682 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:32.682 nvme0n1 : 2.00 25176.71 98.35 0.00 0.00 5077.96 2236.77 18919.96 00:23:32.682 =================================================================================================================== 00:23:32.682 Total : 25176.71 98.35 0.00 0.00 5077.96 2236.77 18919.96 00:23:32.682 0 00:23:32.682 03:18:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:32.682 03:18:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:32.682 03:18:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:32.682 | .driver_specific 00:23:32.682 | .nvme_error 00:23:32.682 | .status_code 00:23:32.682 | .command_transient_transport_error' 00:23:32.682 03:18:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
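[Decoded from the xtrace above: get_transient_errcount (host/digest.sh lines 27-28) is a thin wrapper around bdev_get_iostat on bdevperf's RPC socket, and the check on digest.sh line 71 simply asserts that the count is non-zero. A minimal standalone sketch of the same check, in shell; the rpc.py path and /var/tmp/bperf.sock address are the ones used in this workspace, the bdev name nvme0n1 comes from the attach step, and the variable names are illustrative:]

  # Ask bdevperf for per-bdev I/O statistics; --nvme-error-stat (set earlier
  # via bdev_nvme_set_options) makes it tally NVMe completions by status code.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Pass when at least one READ completed as COMMAND TRANSIENT TRANSPORT ERROR
  # (sct/sc 00/22); the run above produced 197 of them.
  (( errcount > 0 ))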
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1160369
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1160369 /var/tmp/bperf.sock
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1160369 ']'
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:33.197 03:18:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:33.197 [2024-05-15 03:18:04.212767] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:23:33.197 [2024-05-15 03:18:04.212815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160369 ]
00:23:33.197 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:33.197 Zero copy mechanism will not be used.
00:23:33.197 EAL: No free 2048 kB hugepages reported on node 1
00:23:33.197 [2024-05-15 03:18:04.267478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:33.197 [2024-05-15 03:18:04.337526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:34.130 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:34.697 nvme0n1
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:34.697 03:18:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
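[Condensed from the trace, the setup for this second phase (128 KiB reads, queue depth 16) is a short RPC sequence. A sketch of the equivalent direct calls, under the assumption suggested by the trace that rpc_cmd goes to the nvmf target's default RPC socket while bperf_rpc pins -s /var/tmp/bperf.sock; $rpc is the workspace rpc.py as above:]

  # bdevperf side: tally NVMe errors per status code and retry failed I/O
  # indefinitely (--bdev-retry-count -1), so transient errors never fail the job.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target side: clear any previous crc32c injection before reconnecting.
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # Attach the controller with data digest enabled (--ddgst), so the host
  # actually verifies the CRC-32C digest on incoming data PDUs.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm injection: corrupt crc32c results at an interval of 32 operations,
  # which is what produces the digest-error stream below.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32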
00:23:34.697 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:34.697 Zero copy mechanism will not be used.
00:23:34.697 Running I/O for 2 seconds...
00:23:34.697 [2024-05-15 03:18:05.676142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:34.697 [2024-05-15 03:18:05.676178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.697 [2024-05-15 03:18:05.676190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:34.697 [2024-05-15 03:18:05.685334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:34.697 [2024-05-15 03:18:05.685361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.697 [2024-05-15 03:18:05.685370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern continues for the rest of this excerpt (last entry here: 03:18:06.018800): every 128 KiB READ (len:32) on tqpair 0x11a92b0 fails its data digest check and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps, cid, lba and sqhd vary per I/O ...]
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.024802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.024824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.024832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.030896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.030917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.030926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.036691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.036712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.042587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.042608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.042616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.048591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.048611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.048619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.054595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.054616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.054624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.060589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.060610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.060617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.066414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.066434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.066442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.072502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.072522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.072530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.078703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.078723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.078731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.084658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.084680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.084688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.090270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.090295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.090302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.096075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.096096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.096105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.102239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.102261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.958 [2024-05-15 03:18:06.102269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.107947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.107968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.107976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.958 [2024-05-15 03:18:06.113610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:34.958 [2024-05-15 03:18:06.113632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.958 [2024-05-15 03:18:06.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.119333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.119354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.119362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.124980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.125001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.125009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.130614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.130635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.130643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.136302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.136323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.136332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.141872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.141893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.147416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.147437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.147445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.153139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.153159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.153167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.158873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.158894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.158902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.164322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.164342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.164350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.169681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.169701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.169708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.217 [2024-05-15 03:18:06.175045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.217 [2024-05-15 03:18:06.175066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.217 [2024-05-15 03:18:06.175074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.180566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.180587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.180595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.186439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.186460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.186478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.193033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.193054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.193062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.200540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.200560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.200568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.208360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.208381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.208389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.216364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.216386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.216395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.225000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.225023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.225032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.234798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 
00:23:35.218 [2024-05-15 03:18:06.234820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.234829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.243582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.243604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.243613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.252118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.252140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.261473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.261495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.261504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.270611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.270633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.270642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.280726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.280747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.280755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.289855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.289877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.289885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.299313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.299335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.299343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.309539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.309561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.309570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.318131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.318152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.318161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.326071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.326092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.326101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.334961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.334983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.334995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.343380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.343402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.343410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.352902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.352923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.352931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.362399] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.362421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.362429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.218 [2024-05-15 03:18:06.371808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.218 [2024-05-15 03:18:06.371829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.218 [2024-05-15 03:18:06.371837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.381274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.381296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.381305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.390491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.390512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.390521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.400117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.400140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.400147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.409162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.409184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.409193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.418098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.418124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.418132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:35.478 [2024-05-15 03:18:06.427531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.427554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.427562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.436651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.436673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.436682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.446433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.446455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.446463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.455446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.455475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.455484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.464167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.464190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.464198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.472067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.472089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.472101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.479833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.479855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.479863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.487221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.487243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.487252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.494275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.494296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.494304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.501222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.501243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.501252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.507951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.507972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.507980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.515422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.515444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.515452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.520006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.520027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.520034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.526061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.526081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.526089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.532127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.532148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.532156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.540028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.540050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.540058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.548400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.548423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.556929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.556951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.556960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.566115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.566136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.566144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.575472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.575494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.478 [2024-05-15 03:18:06.575502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.478 [2024-05-15 03:18:06.584407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.478 [2024-05-15 03:18:06.584429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:35.479 [2024-05-15 03:18:06.584437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.594127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.594148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.594156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.602624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.602644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.602652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.610507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.610528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.610536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.618268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.618288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.625401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.625425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.625433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.631964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.631985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.631992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.479 [2024-05-15 03:18:06.638390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.479 [2024-05-15 03:18:06.638411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.479 [2024-05-15 03:18:06.638419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.738 [2024-05-15 03:18:06.644675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.644696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.644704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.651507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.651528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.651535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.659133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.659162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.659171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.666460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.666487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.666495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.673569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.673589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.673597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.679848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.679869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.739 [2024-05-15 03:18:06.679876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.739 [2024-05-15 03:18:06.685969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:35.739 [2024-05-15 03:18:06.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.739 [2024-05-15 03:18:06.685998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.739 [2024-05-15 03:18:06.692024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:35.739 [2024-05-15 03:18:06.692044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.739 [2024-05-15 03:18:06.692052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.739 [2024-05-15 03:18:06.698211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:35.739 [2024-05-15 03:18:06.698232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.739 [2024-05-15 03:18:06.698240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... ~125 further error/command/completion triplets in the identical pattern omitted: each READ on qid:1 (cids 0, 1, 2, 3, 5, 8, 9, 13, 15; len:32; varying LBAs) hits "data digest error on tqpair=(0x11a92b0)" at nvme_tcp.c:1450 and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); SPDK timestamps run 03:18:06.704164 through 03:18:07.441119 while the console clock advances from 00:23:35.739 to 00:23:36.520 ...]
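Every entry in this phase follows the same two-step shape: nvme_tcp_accel_seq_recv_compute_crc32_done (nvme_tcp.c:1450) flags a receive-side data digest mismatch on tqpair 0x11a92b0, and the READ it belongs to is then completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. the transport, not the namespace, invalidated the transfer. For triage it is usually enough to reduce this wall of output to counts. Below is a minimal Python sketch for doing so, assuming the console log has been saved to a local file (the default filename is illustrative); the regexes simply mirror the message formats printed above:

```python
#!/usr/bin/env python3
"""Summarize the repeated data-digest-error entries in an SPDK console log.

Minimal sketch: the log path is illustrative, and the regexes mirror the
nvme_tcp.c / nvme_qpair.c message formats shown in the output above.
"""
import re
import sys
from collections import Counter

# e.g. "data digest error on tqpair=(0x11a92b0)"
DIGEST_RE = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
# e.g. "COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 ..."
COMPLETION_RE = re.compile(
    r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+) cid:(\d+)"
)

def summarize(path):
    digest_errors = Counter()   # keyed by tqpair pointer
    completions = Counter()     # keyed by (qid, cid)
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # Use findall/finditer rather than search: wrapped console
            # lines can carry several records each.
            for tqpair in DIGEST_RE.findall(line):
                digest_errors[tqpair] += 1
            for m in COMPLETION_RE.finditer(line):
                completions[(int(m.group(1)), int(m.group(2)))] += 1
    for tqpair, n in sorted(digest_errors.items()):
        print(f"tqpair {tqpair}: {n} data digest errors")
    for (qid, cid), n in sorted(completions.items()):
        print(f"qid:{qid} cid:{cid}: {n} transient transport error completions")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")
```

If the digest-error count matches the (00/22) completion count per queue, the run is behaving consistently: every corrupted transfer, and only those, was retired as a transient transport error, which is the expected signature of the digest-error handling this phase exercises.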
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.446903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.452668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.452688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.452696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.458508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.458528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.458536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.464285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.464306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.464314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.470083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.470104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.470112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.475884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.475904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.475912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.481652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.481673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.481684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.487408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.487428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.487436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.493177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.493198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.493206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.498877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.498899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.498907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.504690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.504711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.504719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.510426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.510446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.510454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.516136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.516157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.516164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.521811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.521831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.521839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.527509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 
[2024-05-15 03:18:07.527529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.527537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.533338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.533362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.533370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.539069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.539090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.539098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.544685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.544705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.544713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.550332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.550352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.550360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.556101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.556121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.556129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.561825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.561845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.561853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.567440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.567461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.567476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.573159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.573179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.573187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.578939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.520 [2024-05-15 03:18:07.578959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.520 [2024-05-15 03:18:07.578967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.520 [2024-05-15 03:18:07.584584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.584605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.584612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.590600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.590621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.590629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.596931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.596950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.603006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.603027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.603035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.608736] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.608756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.608764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.614751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.614771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.614779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.620523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.620544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.620551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.626752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.626773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.626781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.633295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.633316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.633328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.639344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.639365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.639373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.521 [2024-05-15 03:18:07.645365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0) 00:23:36.521 [2024-05-15 03:18:07.645385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.521 [2024-05-15 03:18:07.645393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:36.521 [2024-05-15 03:18:07.651642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:36.521 [2024-05-15 03:18:07.651663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.521 [2024-05-15 03:18:07.651670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:36.521 [2024-05-15 03:18:07.657575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:36.521 [2024-05-15 03:18:07.657596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.521 [2024-05-15 03:18:07.657604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:36.521 [2024-05-15 03:18:07.663181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:36.521 [2024-05-15 03:18:07.663202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.521 [2024-05-15 03:18:07.663210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:36.521 [2024-05-15 03:18:07.668532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11a92b0)
00:23:36.521 [2024-05-15 03:18:07.668552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.521 [2024-05-15 03:18:07.668560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:36.521
00:23:36.521 Latency(us)
00:23:36.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.521 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:36.521 nvme0n1 : 2.00 4875.55 609.44 0.00 0.00 3278.71 762.21 10143.83
00:23:36.521 ===================================================================================================================
00:23:36.521 Total : 4875.55 609.44 0.00 0.00 3278.71 762.21 10143.83
00:23:36.521 0
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:36.779 | .driver_specific
00:23:36.779 | .nvme_error
00:23:36.779 | .status_code
00:23:36.779 | .command_transient_transport_error'
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 ))
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1160369
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1160369 ']'
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1160369
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1160369
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1160369'
killing process with pid 1160369
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1160369
Received shutdown signal, test time was about 2.000000 seconds
00:23:36.779
00:23:36.779 Latency(us)
00:23:36.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.779 ===================================================================================================================
00:23:36.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:36.779 03:18:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1160369
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1161456
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1161456 /var/tmp/bperf.sock
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1161456 ']'
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:37.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:37.037 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:37.037 [2024-05-15 03:18:08.170018] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:23:37.038 [2024-05-15 03:18:08.170062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161456 ]
00:23:37.038 EAL: No free 2048 kB hugepages reported on node 1
00:23:37.295 [2024-05-15 03:18:08.222523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:37.295 [2024-05-15 03:18:08.290797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:37.860 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:37.860 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:23:37.860 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:37.860 03:18:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:38.118 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:38.376 nvme0n1
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:38.376 03:18:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:38.376 Running I/O for 2 seconds...
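
The xtrace lines above are the complete setup for this write pass, so it is worth restating them as plain commands. A minimal sketch, using only what is visible in the trace (bperf_rpc wraps scripts/rpc.py with -s /var/tmp/bperf.sock, per the digest.sh@18 lines, while rpc_cmd goes through the default RPC socket, so the crc32c corruption is injected outside bdevperf; the digest failures below are indeed logged by tcp.c on the target side. Jenkins workspace paths are shortened here):

    # keep per-controller NVMe error counters and retry transient failures indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the controller with TCP data digest (--ddgst) enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # clear any previous injection, then corrupt every 256th crc32c operation in the accel layer
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # start the queued bdevperf job (randwrite, 4096-byte I/O, queue depth 128, 2 seconds)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted CRC32C then surfaces below as a "Data digest error" on the qpair, followed by the affected WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; because --bdev-retry-count is -1, those I/Os are retried rather than failed back to bdevperf.
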
00:23:38.376 [2024-05-15 03:18:09.526299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.376 [2024-05-15 03:18:09.526522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.376 [2024-05-15 03:18:09.526552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.376 [2024-05-15 03:18:09.536009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.376 [2024-05-15 03:18:09.536199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.376 [2024-05-15 03:18:09.536222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.545694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.545892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.545912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.555375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.555567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.555590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.564892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.565071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.565089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.574423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.574628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.574647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.584086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.584268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.584287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.634 [2024-05-15 03:18:09.593611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.593840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.593858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.603115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.603294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.612652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.612858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.612875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.622117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.622295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.622320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.631576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.631772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.631791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.641074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.641257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.641283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.650512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.650690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.650707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.634 [2024-05-15 03:18:09.659986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.660199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.669463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.669643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.634 [2024-05-15 03:18:09.669659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.634 [2024-05-15 03:18:09.678881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.634 [2024-05-15 03:18:09.679056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.679074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.688353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.688556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.688574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.697824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.698000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.698017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.707277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.707454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.707474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.716825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.717001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.717017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.635 [2024-05-15 03:18:09.726337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.726538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.726557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.735801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.735976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.735993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.745299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.745501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.745519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.754984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.755164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.755181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.764482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.764681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.764700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.774061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.774239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.774255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.635 [2024-05-15 03:18:09.783490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.783686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.635 [2024-05-15 03:18:09.793234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.635 [2024-05-15 03:18:09.793414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.635 [2024-05-15 03:18:09.793431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.802918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.803100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.803124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.812532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.812735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.812753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.822018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.822196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.822214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.831610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.831809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.841181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.841377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.841395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.850789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.850982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.851001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.894 [2024-05-15 03:18:09.860293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.860473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.860490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.869734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.869910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.869927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.879250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.879447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.879470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.888731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.888907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.888927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.898177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.898354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.898372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.907856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.908037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.917564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.917771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.917789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.894 [2024-05-15 03:18:09.927050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.927229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.927254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.936600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.936793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.936811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.946059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.946254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.946273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.955649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.955825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.955842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.965103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.965281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.965297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.974561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.974743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.974761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.894 [2024-05-15 03:18:09.984044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.984241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.984260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:38.894 [2024-05-15 03:18:09.993509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.894 [2024-05-15 03:18:09.993685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.894 [2024-05-15 03:18:09.993703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.003281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.003470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.003488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.014449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.014661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.014689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.024259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.024441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.024459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.034727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.034927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.044615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.044800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.044818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.895 [2024-05-15 03:18:10.054350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:38.895 [2024-05-15 03:18:10.054544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.895 [2024-05-15 03:18:10.054562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.154 [2024-05-15 03:18:10.064083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.064264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.064282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.074038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.074235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.074255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.083823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.084004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.084022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.093558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.093737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.093755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.103246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.103429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.103447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.113002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.113184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.113201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.122725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.122907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.122932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.154 [2024-05-15 03:18:10.132440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.132627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.132646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.142174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.142362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.142380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.151935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.152117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.152143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.161686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.161869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.161893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.171414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.171604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.171624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.181124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.181305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.181331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.190837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.191018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.191043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.154 [2024-05-15 03:18:10.200548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.200730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.200754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.210251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.210431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.210448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.219981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.220159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.220176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.229692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.229875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.229896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.239403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.239590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.239608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.249139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.249320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.249338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.258920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.259100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.259125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.154 [2024-05-15 03:18:10.268626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.268807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.154 [2024-05-15 03:18:10.268833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.154 [2024-05-15 03:18:10.278323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.154 [2024-05-15 03:18:10.278503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.155 [2024-05-15 03:18:10.278522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.155 [2024-05-15 03:18:10.288035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.155 [2024-05-15 03:18:10.288216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.155 [2024-05-15 03:18:10.288235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.155 [2024-05-15 03:18:10.297761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.155 [2024-05-15 03:18:10.297939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.155 [2024-05-15 03:18:10.297957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.155 [2024-05-15 03:18:10.307476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.155 [2024-05-15 03:18:10.307657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.155 [2024-05-15 03:18:10.307674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.413 [2024-05-15 03:18:10.317182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.317366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.413 [2024-05-15 03:18:10.317384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.413 [2024-05-15 03:18:10.326865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.327043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.413 [2024-05-15 03:18:10.327060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.413 [2024-05-15 03:18:10.336575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.336759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.413 [2024-05-15 03:18:10.336783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.413 [2024-05-15 03:18:10.346251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.346432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.413 [2024-05-15 03:18:10.346458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.413 [2024-05-15 03:18:10.356115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.356296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.413 [2024-05-15 03:18:10.356321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.413 [2024-05-15 03:18:10.365800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.413 [2024-05-15 03:18:10.365983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.366006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.375515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.375695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.375713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.385198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.385376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.385394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.394909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.395090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.395114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.414 [2024-05-15 03:18:10.404613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.404791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.414317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.414499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.414517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.424044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.424223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.433726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.433907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.433931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.443414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.443600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.443619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.453111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.453291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.453316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.462810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.462989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.463007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.414 [2024-05-15 03:18:10.472518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.472699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.472723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.482213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.482392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.482416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.491908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.492087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.492112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.501601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.501782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.501798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.511299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.511484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.521013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.521194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.521218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.530694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.530874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.530891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.414 [2024-05-15 03:18:10.540339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.540528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.540546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.550066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.550248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.550265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.559797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.559978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.559995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.414 [2024-05-15 03:18:10.569501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.414 [2024-05-15 03:18:10.569680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.414 [2024-05-15 03:18:10.569706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.579212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.579395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.588911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.589091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.589108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.598610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.598791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.598816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.673 [2024-05-15 03:18:10.608306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.608516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.608535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.618027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.618220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.618238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.627746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.627924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.627941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.637459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.637634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.637653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.647178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.647338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.647355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.656895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.657062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.657079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.673 [2024-05-15 03:18:10.666621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.673 [2024-05-15 03:18:10.666787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.673 [2024-05-15 03:18:10.666804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.674 [2024-05-15 03:18:10.676206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.676391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.676415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.685929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.686105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.686122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.695627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.695806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.695823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.705331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.705515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.705532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.715071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.715250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.715268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.724795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.724999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.734503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.734685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.734710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.674 [2024-05-15 03:18:10.744173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.744354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.744378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.754126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.754306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.754323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.763811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.763990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.764007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.773536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.773716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.773734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.783234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.783412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.783430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.792967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.793166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.793183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.802714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.802896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.802913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.674 [2024-05-15 03:18:10.812428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.812616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.812635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.822110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.822290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.822311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.674 [2024-05-15 03:18:10.831823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.674 [2024-05-15 03:18:10.832004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.674 [2024-05-15 03:18:10.832022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.841514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.841692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.841709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.851199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.851378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.851395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.860908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.861087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.861104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.870601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.870780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.933 [2024-05-15 03:18:10.880280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.880460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.880481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.890023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.890203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.890220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.899638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.899840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.899859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.909081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.909256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.909276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.918539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.918734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.918759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.927982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.928156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.928172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.937440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.937622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.937640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.933 [2024-05-15 03:18:10.946897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.947090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.947107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.956425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.956626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.956645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.965883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.966055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.966072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.975330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.975508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.975525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.984785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.933 [2024-05-15 03:18:10.984979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.933 [2024-05-15 03:18:10.984998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.933 [2024-05-15 03:18:10.994266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:10.994447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:10.994469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.003707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.003881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.003898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.934 [2024-05-15 03:18:11.013172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.013367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.013385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.022639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.022817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.022834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.032051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.032225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.032242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.041527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.041729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.041747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.050963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.051139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.051156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.060727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.060906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.060922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.070316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.070510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.070528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.934 [2024-05-15 03:18:11.079883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.080060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.080077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.934 [2024-05-15 03:18:11.089257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:39.934 [2024-05-15 03:18:11.089432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.934 [2024-05-15 03:18:11.089449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.192 [2024-05-15 03:18:11.099033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.192 [2024-05-15 03:18:11.099212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.192 [2024-05-15 03:18:11.099229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.192 [2024-05-15 03:18:11.108537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.192 [2024-05-15 03:18:11.108716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.192 [2024-05-15 03:18:11.108733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.117999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.118175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.118192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.127480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.127676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.127694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.136963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.137135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.137152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:40.193 [2024-05-15 03:18:11.146384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.146581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.146600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.155971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.156181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.165405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.165587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.165605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.174868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.175061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.175080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.184359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.184539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.193792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.193966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.193983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.203278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.203476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.203494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:40.193 [2024-05-15 03:18:11.212723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.212899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.212916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.222200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.222393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.222411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.231648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.231826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.241098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.241274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.241294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.250551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.250746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.250764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.260074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.260248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.260265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.269566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.269760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.269778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.193 [2024-05-15 03:18:11.279249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.279426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.279445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.288747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.288922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.288939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.298213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.298391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.298408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.307743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.307928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.307946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.317549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.317735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.317753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.327283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.327470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.327488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.193 [2024-05-15 03:18:11.337015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.337199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.337218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.193 [2024-05-15 03:18:11.346720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.193 [2024-05-15 03:18:11.346890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.193 [2024-05-15 03:18:11.346907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.356562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.356741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.356757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.366201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.366388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.366406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.375746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.375944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.375969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.385222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.385417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.385435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.394703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.394885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.394908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.452 [2024-05-15 03:18:11.404145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.452 [2024-05-15 03:18:11.404324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.452 [2024-05-15 03:18:11.404341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:23:40.452 [2024-05-15 03:18:11.413597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.413805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.413823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.423064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.423260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.423278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.432503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.432698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.441982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.442180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.442198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.451478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.451670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.451687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.461028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.461226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.461243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.470519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.470714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.470732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:40.453 [2024-05-15 03:18:11.479992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.480169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.480186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.489457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.489643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.489663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.498985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.499172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.499190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.508672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.508868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.508886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 [2024-05-15 03:18:11.518307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833cd0) with pdu=0x2000190fd640 00:23:40.453 [2024-05-15 03:18:11.518505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.453 [2024-05-15 03:18:11.518522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.453 00:23:40.453 Latency(us) 00:23:40.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.453 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:40.453 nvme0n1 : 2.00 26509.63 103.55 0.00 0.00 4819.41 4473.54 10941.66 00:23:40.453 =================================================================================================================== 00:23:40.453 Total : 26509.63 103.55 0.00 0.00 4819.41 4473.54 10941.66 00:23:40.453 0 00:23:40.453 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:40.453 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:40.453 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:40.453 | .driver_specific 00:23:40.453 | .nvme_error 00:23:40.453 | .status_code 00:23:40.453 | .command_transient_transport_error' 00:23:40.453 
00:23:40.453 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1161456
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1161456 ']'
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1161456
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1161456
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1161456'
00:23:40.712 killing process with pid 1161456
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1161456
00:23:40.712 Received shutdown signal, test time was about 2.000000 seconds
00:23:40.712
00:23:40.712 Latency(us)
00:23:40.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:40.712 ===================================================================================================================
00:23:40.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1161456
00:23:40.712 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1162153
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1162153 /var/tmp/bperf.sock
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1162153 ']'
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:40.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:40.970 03:18:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:40.970 [2024-05-15 03:18:12.018146] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:23:40.970 [2024-05-15 03:18:12.018192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162153 ]
00:23:40.970 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:40.970 Zero copy mechanism will not be used.
00:23:40.970 EAL: No free 2048 kB hugepages reported on node 1
00:23:41.229 [2024-05-15 03:18:12.071124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:41.795 [2024-05-15 03:18:12.143271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:41.795 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:41.795 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:23:41.795 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:41.795 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:42.053 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:42.053 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.053 03:18:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:42.053 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.053 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:42.053 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:42.311 nvme0n1
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:42.311 03:18:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
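Pulled together, the trace above is the complete setup for this second error run. As a minimal standalone sketch, with every path, flag, address and NQN copied verbatim from the trace (only the SPDK and SOCK shell variables are introduced here for readability):

  #!/usr/bin/env bash
  # Sketch of the run_bperf_err randwrite 131072 16 sequence logged above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle (-z) on its own RPC socket: 128 KiB random writes,
  # queue depth 16, for 2 seconds, on core mask 0x2.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error counters and retry failed I/O forever,
  # so injected digest errors are counted rather than aborting the job.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Against the default RPC socket: injection off while the controller
  # attaches, then corrupt crc32c results (-i 32, as in the trace) so the
  # computed data digests miscompare.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the job; each corrupted digest then surfaces below as a Data digest
  # error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests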
00:23:42.569 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:42.569 Zero copy mechanism will not be used.
00:23:42.569 Running I/O for 2 seconds...
00:23:42.569 [2024-05-15 03:18:13.562374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90
00:23:42.569 [2024-05-15 03:18:13.562778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.569 [2024-05-15 03:18:13.562806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:42.569 [2024-05-15 03:18:13.569176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90
00:23:42.569 [2024-05-15 03:18:13.569555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.569 [2024-05-15 03:18:13.569576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... log trimmed: the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats on tqpair=(0x833e00) with pdu=0x2000190fef90 for each subsequent 128 KiB write (cid:15, sqhd cycling 0001/0021/0041/0061), through 03:18:14.144493 ...]
00:23:43.092 [2024-05-15 03:18:14.151437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90
00:23:43.092 [2024-05-15 03:18:14.151890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 03:18:14.151910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.158447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.158905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.158924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.166361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.166807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.166825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.174130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.174539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.174562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.180871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.181225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.181244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.186909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.187293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.187311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.193101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.193455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.193480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.092 [2024-05-15 03:18:14.198988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.092 [2024-05-15 03:18:14.199337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:43.092 [2024-05-15 03:18:14.199355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.205573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.205946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.205963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.211530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.211902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.211920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.217309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.217685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.217704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.223244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.223618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.223647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.229253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.229622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.229641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.235956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.236332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.236350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.242445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.242840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.242859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.093 [2024-05-15 03:18:14.248662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.093 [2024-05-15 03:18:14.249041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.093 [2024-05-15 03:18:14.249059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.254924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.255289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.255307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.260844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.261264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.261283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.266996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.267392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.267410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.274149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.274610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.274628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.282508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.283015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.283034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.290960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.291402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.291421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.299579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.300054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.300072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.307899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.308398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.308417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.316461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.353 [2024-05-15 03:18:14.316929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.353 [2024-05-15 03:18:14.316946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.353 [2024-05-15 03:18:14.324681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.325145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.325163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.333000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.333446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.341364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.341800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.341819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.349522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.349983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.350001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.357529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.357950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.365273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.365761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.365780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.374183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.374595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.374614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.382398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.382883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.390942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.391393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.391411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.399591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.400059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.400077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.408123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 
[2024-05-15 03:18:14.408542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.408561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.416560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.416985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.417003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.424564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.425039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.425058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.433092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.433593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.433612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.442050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.442485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.442504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.450071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.450493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.450512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.458162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.458591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.458609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.466126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with 
pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.466521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.466539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.474305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.474650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.474668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.482080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.482487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.482506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.490567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.490979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.490998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.498373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.498673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.498690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.505163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.505603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.505622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.354 [2024-05-15 03:18:14.512903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.354 [2024-05-15 03:18:14.513323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.354 [2024-05-15 03:18:14.513342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.520474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.520849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.520867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.528085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.528497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.528514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.536063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.536505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.536523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.543896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.544291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.544309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.550902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.551250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.551269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.559103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.559490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.559508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.566919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.567344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.567365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.574733] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.575112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.575130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.582589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.582975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.582993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.589083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.589383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.589401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.596076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.596500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.596518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.603609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.603995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.604014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.610785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.611179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.611197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.617828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.618222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.618239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:43.616 [2024-05-15 03:18:14.624790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.625151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.625168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.631967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.632293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.632312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.639392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.639717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.639735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.647342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.647670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.647688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.654107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.654450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.654473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.661167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.616 [2024-05-15 03:18:14.661471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.616 [2024-05-15 03:18:14.661489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.616 [2024-05-15 03:18:14.667610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.667911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.667928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.673303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.673579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.673597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.678872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.679145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.679162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.683717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.683938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.683956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.687845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.688077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.688095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.691828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.692065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.692084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.695726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.695942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.695960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.699594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.699811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.699829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.703488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.703714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.703732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.707323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.707549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.707567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.711153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.711369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.711387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.715012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.715239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.715257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.718840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.719067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.719093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.722631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.722851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.722869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.726433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.726656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.726674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.730205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.730432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.730450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.733965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.734183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.734201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.738444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.738699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.738717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.742319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.742544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.742561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.746127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.746346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.746364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.749994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.750232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.750250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.754055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.754281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 
[2024-05-15 03:18:14.754299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.757882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.758105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.758123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.761730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.761954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.765535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.617 [2024-05-15 03:18:14.765776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.617 [2024-05-15 03:18:14.765794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.617 [2024-05-15 03:18:14.769693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.618 [2024-05-15 03:18:14.769913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.618 [2024-05-15 03:18:14.769930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.618 [2024-05-15 03:18:14.774586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.618 [2024-05-15 03:18:14.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.618 [2024-05-15 03:18:14.774843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.779787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.780009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.780027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.784320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.784550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.784569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.788666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.788892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.788910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.792952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.793172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.793191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.797484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.797721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.797739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.801608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.801843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.801861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.806009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.806226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.806244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.810293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.810526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.810545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.814823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.815052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.815070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.819221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.819440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.819458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.823596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.823823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.823842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.827826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.828064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.828086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.832357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.832580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.832598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.836847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.837080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.837098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.841163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.841388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.841406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.845490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.845710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.849716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.849929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.849947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.853988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.854233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.854251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.858244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.858474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.858508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.862579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.862816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.867107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.867341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.867359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.871287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 [2024-05-15 03:18:14.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.878 [2024-05-15 03:18:14.871555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.878 [2024-05-15 03:18:14.875446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.878 
[2024-05-15 03:18:14.875679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.875698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.879662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.879900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.879918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.884207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.884438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.888665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.888877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.892931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.893148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.893166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.897234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.897453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.897477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.901523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.901773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.905978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) 
with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.906237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.906255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.910276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.910513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.910531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.914713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.914941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.914959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.919024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.919253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.923283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.923510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.923528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.927815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.928039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.928057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.932299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.932527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.932545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.936583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.936838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.936855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.940858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.941082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.941105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.945126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.945366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.945384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.949747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.949978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.949997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.954001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.954231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.954249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.958266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.958485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.958520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.962791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.963013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.963031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.967116] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.967343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.967361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.971393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.971628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.971646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.975789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.976021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.976038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.980426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.980640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.984749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.984976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.984994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.988910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.989156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.989175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.879 [2024-05-15 03:18:14.993139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.879 [2024-05-15 03:18:14.993367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.879 [2024-05-15 03:18:14.993385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:43.879 [2024-05-15 03:18:14.997875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:14.998096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:14.998114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.002114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.002324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.002342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.006615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.006847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.010890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.011122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.011140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.015392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.015643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.015665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.019651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.019876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.019894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.024190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.024408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.024427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.028519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.028736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.028754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.032900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.033108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.033142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.880 [2024-05-15 03:18:15.037286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:43.880 [2024-05-15 03:18:15.037514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.880 [2024-05-15 03:18:15.037532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.041574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.041796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.041814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.046000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.046230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.050170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.050397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.054405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.054632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.054650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.058662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.058881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.058899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.062846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.063067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.063084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.067190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.067415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.067434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.071785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.072008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.072026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.076093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.076319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.076338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.080446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.080678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.080696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.085054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.085305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.085323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.089440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.089670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.089688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.093846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.094064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.094082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.098086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.098308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.098325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.102435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.102660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.102677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.106914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.107136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.107154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.111170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.140 [2024-05-15 03:18:15.111387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.140 [2024-05-15 03:18:15.111406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.140 [2024-05-15 03:18:15.115589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.115813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 
03:18:15.115831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.119897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.120130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.120148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.124427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.124653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.124672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.128774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.128997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.129019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.133017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.133240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.133258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.137559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.137777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.137794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.141813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.142031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.142049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.146027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.146245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.146263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.150371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.150611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.150629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.154886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.155111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.155129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.159085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.159298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.159315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.163231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.163458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.163481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.167304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.167545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.167563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.171798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.172021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.177074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.177298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.177316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.181813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.182031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.182048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.186209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.186437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.186455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.190619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.190842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.190859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.194820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.195045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.195063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.199299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.199564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.199582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.204998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.205325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.205343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.210421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.210705] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.210724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.216437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.216757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.216776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.223990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.224354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.224372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.231315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.231655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.231673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.238859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.239227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.239245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.246270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.246489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.246508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.251578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.141 [2024-05-15 03:18:15.251758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.141 [2024-05-15 03:18:15.251776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.141 [2024-05-15 03:18:15.257002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.257235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.257254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.262602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.262801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.262823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.267740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.267913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.267931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.271973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.272164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.272182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.276311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.276491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.276508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.281275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.281497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.281516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.287114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 [2024-05-15 03:18:15.287320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.287338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.142 [2024-05-15 03:18:15.293313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.142 
[2024-05-15 03:18:15.293508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.142 [2024-05-15 03:18:15.293525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.402
[... the same two-entry pattern — tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90, followed by the injected WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats for several dozen more LBAs between 03:18:15.300 and 03:18:15.548; entries elided ...]
[2024-05-15 03:18:15.551865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x833e00) with pdu=0x2000190fef90 00:23:44.404 [2024-05-15 03:18:15.552001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.404 [2024-05-15 03:18:15.552018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.404
00:23:44.404 Latency(us) 00:23:44.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.404 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:44.404 nvme0n1 : 2.00 5515.78 689.47 0.00 0.00 2896.35 1802.24 13506.11 00:23:44.404 =================================================================================================================== 00:23:44.404 Total : 5515.78 689.47 0.00 0.00 2896.35 1802.24 13506.11 00:23:44.404 0 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:44.664 | .driver_specific 00:23:44.664 | .nvme_error 00:23:44.664 | .status_code 00:23:44.664 | .command_transient_transport_error' 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1162153 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1162153 ']' 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1162153 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux
']' 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1162153 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1162153' 00:23:44.664 killing process with pid 1162153 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1162153 00:23:44.664 Received shutdown signal, test time was about 2.000000 seconds 00:23:44.664 00:23:44.664 Latency(us) 00:23:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.664 =================================================================================================================== 00:23:44.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.664 03:18:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1162153 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1159516 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1159516 ']' 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1159516 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1159516 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1159516' 00:23:44.923 killing process with pid 1159516 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1159516 00:23:44.923 [2024-05-15 03:18:16.062776] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:44.923 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1159516 00:23:45.182 00:23:45.182 real 0m17.000s 00:23:45.182 user 0m32.593s 00:23:45.182 sys 0m4.429s 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:45.182 ************************************ 00:23:45.182 END TEST nvmf_digest_error 00:23:45.182 ************************************ 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 
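The get_transient_errcount check that gated the test above reads the NVMe error counters bdevperf accumulates per bdev: it queries iostat over the bperf RPC socket and asserts the transient-transport-error count is non-zero. A minimal sketch of that pipeline, assuming the socket path and bdev name from this run; the JSON in the comment is abbreviated to just the fields the jq filter walks, with the counter value (356) taken from the log:

    # Query per-bdev iostat from the running bdevperf app (host/digest.sh does this via bperf_rpc)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # Abbreviated response shape the filter walks (illustrative, not the full iostat body):
    #   { "bdevs": [ { "name": "nvme0n1", "driver_specific": { "nvme_error": {
    #       "status_code": { "command_transient_transport_error": 356 } } } } ] }
    # The test then asserts the count is non-zero:  (( 356 > 0 ))

Every injected data-digest error therefore shows up as one COMMAND TRANSIENT TRANSPORT ERROR completion, which is what makes the counter a direct pass/fail signal for the digest-error path.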
00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.182 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.182 rmmod nvme_tcp 00:23:45.182 rmmod nvme_fabrics 00:23:45.441 rmmod nvme_keyring 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1159516 ']' 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1159516 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1159516 ']' 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1159516 00:23:45.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1159516) - No such process 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1159516 is not found' 00:23:45.441 Process with pid 1159516 is not found 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.441 03:18:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.405 03:18:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.405 00:23:47.405 real 0m41.749s 00:23:47.405 user 1m7.065s 00:23:47.405 sys 0m12.796s 00:23:47.405 03:18:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:47.405 03:18:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:47.405 ************************************ 00:23:47.405 END TEST nvmf_digest 00:23:47.405 ************************************ 00:23:47.405 03:18:18 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:23:47.405 03:18:18 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:23:47.405 03:18:18 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:23:47.405 03:18:18 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:47.405 03:18:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:47.405 03:18:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:47.405 03:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.405 ************************************ 00:23:47.405 START TEST nvmf_bdevperf 00:23:47.405 ************************************ 00:23:47.405 03:18:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:47.666 * Looking for test storage... 00:23:47.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.666 03:18:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.941 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.941 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.941 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.941 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:23:52.941 00:23:52.941 --- 10.0.0.2 ping statistics --- 00:23:52.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.941 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:52.941 00:23:52.941 --- 10.0.0.1 ping statistics --- 00:23:52.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.941 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:52.941 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1166161 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1166161 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1166161 ']' 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.942 03:18:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:52.942 [2024-05-15 03:18:23.864976] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:52.942 [2024-05-15 03:18:23.865016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.942 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.942 [2024-05-15 03:18:23.920979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:52.942 [2024-05-15 03:18:24.001297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
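For orientation: the nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk namespace that the plumbing above created. Condensed from the logged xtrace, the nvmf_tcp_init steps in nvmf/common.sh amount to moving one E810 port into a private namespace and addressing both ends; a sketch using the cvl_0_0/cvl_0_1 device names discovered in this run:

    ip netns add cvl_0_0_ns_spdk                   # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # sanity checks, as logged above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why the target listens on 10.0.0.2 while the initiator (bdevperf) connects from 10.0.0.1 on the same host.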
00:23:52.942 [2024-05-15 03:18:24.001331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.942 [2024-05-15 03:18:24.001338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.942 [2024-05-15 03:18:24.001346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.942 [2024-05-15 03:18:24.001351] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.942 [2024-05-15 03:18:24.001392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.942 [2024-05-15 03:18:24.001483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.942 [2024-05-15 03:18:24.001485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 [2024-05-15 03:18:24.716838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 Malloc0 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 [2024-05-15 03:18:24.789004] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:53.878 [2024-05-15 03:18:24.789218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.878 03:18:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.879 { 00:23:53.879 "params": { 00:23:53.879 "name": "Nvme$subsystem", 00:23:53.879 "trtype": "$TEST_TRANSPORT", 00:23:53.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.879 "adrfam": "ipv4", 00:23:53.879 "trsvcid": "$NVMF_PORT", 00:23:53.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.879 "hdgst": ${hdgst:-false}, 00:23:53.879 "ddgst": ${ddgst:-false} 00:23:53.879 }, 00:23:53.879 "method": "bdev_nvme_attach_controller" 00:23:53.879 } 00:23:53.879 EOF 00:23:53.879 )") 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:53.879 03:18:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.879 "params": { 00:23:53.879 "name": "Nvme1", 00:23:53.879 "trtype": "tcp", 00:23:53.879 "traddr": "10.0.0.2", 00:23:53.879 "adrfam": "ipv4", 00:23:53.879 "trsvcid": "4420", 00:23:53.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.879 "hdgst": false, 00:23:53.879 "ddgst": false 00:23:53.879 }, 00:23:53.879 "method": "bdev_nvme_attach_controller" 00:23:53.879 }' 00:23:53.879 [2024-05-15 03:18:24.840116] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:23:53.879 [2024-05-15 03:18:24.840160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166406 ] 00:23:53.879 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.879 [2024-05-15 03:18:24.894089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.879 [2024-05-15 03:18:24.966921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.137 Running I/O for 1 seconds... 
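The target that bdevperf is now exercising was assembled just above with a short RPC sequence (rpc_cmd in the log is a thin wrapper that invokes scripts/rpc.py against the target's default RPC socket). The equivalent standalone sequence, as a sketch with comments reflecting the sizes set in bdevperf.sh:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MB ramdisk, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches as the initiator using the JSON shown above, which gen_nvmf_target_json emits as a bdev_nvme_attach_controller config for the same address, port, and subsystem NQN.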
00:23:55.514
00:23:55.514 Latency(us)
00:23:55.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:55.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:55.514 Verification LBA range: start 0x0 length 0x4000
00:23:55.514 Nvme1n1 : 1.01 10798.49 42.18 0.00 0.00 11807.12 1118.39 16526.47
00:23:55.514 ===================================================================================================================
00:23:55.514 Total : 10798.49 42.18 0.00 0.00 11807.12 1118.39 16526.47
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1166639
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:23:55.514 {
00:23:55.514 "params": {
00:23:55.514 "name": "Nvme$subsystem",
00:23:55.514 "trtype": "$TEST_TRANSPORT",
00:23:55.514 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:55.514 "adrfam": "ipv4",
00:23:55.514 "trsvcid": "$NVMF_PORT",
00:23:55.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:55.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:55.514 "hdgst": ${hdgst:-false},
00:23:55.514 "ddgst": ${ddgst:-false}
00:23:55.514 },
00:23:55.514 "method": "bdev_nvme_attach_controller"
00:23:55.514 }
00:23:55.514 EOF
00:23:55.514 )")
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:23:55.514 03:18:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:23:55.514 "params": {
00:23:55.514 "name": "Nvme1",
00:23:55.514 "trtype": "tcp",
00:23:55.514 "traddr": "10.0.0.2",
00:23:55.514 "adrfam": "ipv4",
00:23:55.514 "trsvcid": "4420",
00:23:55.514 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:55.514 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:55.514 "hdgst": false,
00:23:55.514 "ddgst": false
00:23:55.514 },
00:23:55.514 "method": "bdev_nvme_attach_controller"
00:23:55.514 }'
00:23:55.514 [2024-05-15 03:18:26.552853] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:23:55.514 [2024-05-15 03:18:26.552901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166639 ]
00:23:55.514 EAL: No free 2048 kB hugepages reported on node 1
00:23:55.514 [2024-05-15 03:18:26.607540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:55.772 [2024-05-15 03:18:26.679251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:55.772 Running I/O for 15 seconds...
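[Editor's note: the 1-second result is self-consistent: 10798.49 I/Os per second at 4096 bytes each is 10798.49 × 4096 / 2^20 ≈ 42.18 MiB/s, matching the MiB/s column. Both bdevperf runs receive the generated JSON through process substitution (--json /dev/fd/62 and /dev/fd/63). A standalone sketch of that config, assuming the standard SPDK JSON-config wrapper (subsystems → bdev → config) around the bdev_nvme_attach_controller call printed above; the /tmp file name is illustrative, not the script's own:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f

The -f flag, passed only on this 15-second run, appears to be what keeps bdevperf running through the I/O failures injected below rather than exiting on the first error.]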
00:23:59.061 03:18:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1166161
00:23:59.061 03:18:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:23:59.061 [2024-05-15 03:18:29.524800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.061 [2024-05-15 03:18:29.524840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Editor's note: roughly 125 further command/completion pairs elided. After the target process is killed, every remaining in-flight WRITE and READ on qpair 1 (LBAs 94216 through 95224) is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with the same ABORTED - SQ DELETION (00/08) status.]
00:23:59.063 [2024-05-15 03:18:29.526828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0a50 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.526836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:59.063 [2024-05-15 03:18:29.526841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:59.063 [2024-05-15 03:18:29.526847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95232 len:8 PRP1 0x0 PRP2 0x0
00:23:59.063 [2024-05-15 03:18:29.526854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.063 [2024-05-15 03:18:29.526895] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c0a50 was disconnected and freed. reset controller.
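[Editor's note: the abort storm above is deliberate. host/bdevperf.sh line 33 hard-kills the nvmf target (pid 1166161) while the 15-second verify run is in flight, so every queued command on the qpair completes as ABORTED - SQ DELETION and bdev_nvme frees the qpair and schedules a controller reset. A hedged sketch of the injection pattern (variable names and the core mask are illustrative; the script apparently drives this through its own nvmfappstart/rpc_cmd helpers):

    nvmf_tgt -m 0xE &          # target reactors on cores 1-3, matching the log above
    tgt_pid=$!
    # ... configure the target, then start bdevperf -q 128 -o 4096 -w verify -t 15 -f ...
    kill -9 "$tgt_pid"         # hard-kill the target while I/O is in flight
    sleep 3                    # give the initiator time to abort I/O and begin resetting
]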
00:23:59.063 [2024-05-15 03:18:29.529754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.529805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.530408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.530662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.530673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.530681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.530862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.531042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.531049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.531056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.533928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.063 [2024-05-15 03:18:29.543071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.543458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.543645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.543655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.543663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.543842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.544022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.544030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.544040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.546852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.063 [2024-05-15 03:18:29.556069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.556489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.556742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.556752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.556758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.556923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.557088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.557095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.557101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.559837] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.063 [2024-05-15 03:18:29.569038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.569512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.569729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.569759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.569780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.570372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.570637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.570646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.570653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.573394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.063 [2024-05-15 03:18:29.581951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.582425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.582674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.582685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.582692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.582867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.583041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.583049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.583055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.585771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.063 [2024-05-15 03:18:29.594840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.063 [2024-05-15 03:18:29.595312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.595547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.063 [2024-05-15 03:18:29.595581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.063 [2024-05-15 03:18:29.595602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.063 [2024-05-15 03:18:29.596189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.063 [2024-05-15 03:18:29.596703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.063 [2024-05-15 03:18:29.596712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.063 [2024-05-15 03:18:29.596718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.063 [2024-05-15 03:18:29.599426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.063 [2024-05-15 03:18:29.607681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.608075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.608218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.608228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.608234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.608408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.608589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.608598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.608604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.611319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.620655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.621085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.621360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.621391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.621412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.621707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.621882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.621890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.621896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.624635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.633594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.633979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.634155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.634164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.634171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.634336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.634523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.634531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.634538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.637313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.646595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.646963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.647137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.647147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.647154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.647328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.647509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.647517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.647523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.650234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.659512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.659945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.660165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.660196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.660217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.660502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.660677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.660685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.660691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.663406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.672599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.673075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.673322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.673353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.673373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.673858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.674038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.674046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.674052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.676811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.063 [2024-05-15 03:18:29.685472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.063 [2024-05-15 03:18:29.685866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.686109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.063 [2024-05-15 03:18:29.686119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.063 [2024-05-15 03:18:29.686126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.063 [2024-05-15 03:18:29.686300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.063 [2024-05-15 03:18:29.686481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.063 [2024-05-15 03:18:29.686490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.063 [2024-05-15 03:18:29.686496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.063 [2024-05-15 03:18:29.689202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.698541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.698904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.699152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.699162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.699168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.699342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.699539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.699548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.699554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.702379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.711397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.711880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.712056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.712066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.712073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.712248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.712423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.712430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.712436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.715153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.724268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.724628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.724845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.724855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.724861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.725026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.725191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.725199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.725205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.727921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.737218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.737665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.737906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.737915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.737922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.738086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.738251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.738258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.738264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.740992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.750180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.750624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.750743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.750752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.750762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.750927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.751092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.751099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.751105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.754090] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.763152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.763610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.763830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.763840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.763847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.764011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.764175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.764183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.764188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.766986] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.776195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.776668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.776891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.776922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.776943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.777544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.778130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.778139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.778145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.781017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.789384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.789773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.790030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.790060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.790082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.790647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.790828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.790837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.790843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.793716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.802521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.802865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.803085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.803095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.803102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.803276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.803453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.803461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.803472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.806241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.815557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.816018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.816257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.816287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.816309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.816908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.817246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.817254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.817260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.820037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.828587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.829036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.829255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.829266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.829273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.829446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.829629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.829637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.829644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.832422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.841661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.842059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.842282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.842292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.842298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.842479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.842654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.842662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.842668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.845500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.854737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.855149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.855336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.855366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.855386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.855978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.856155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.856164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.856170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.859062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.867887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.868329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.868583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.868617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.868639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.869038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.869218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.869229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.869236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.872074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.880838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.881157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.881385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.881415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.881437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.882005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.882181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.882189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.882195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.884915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.893688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.893994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.894168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.894197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.894218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.894817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.895366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.895374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.895380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.898141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.906748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.907060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.907236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.907246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.907253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.907427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.907608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.907617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.907626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.910403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.919694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.920017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.920137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.920148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.920154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.920329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.920508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.064 [2024-05-15 03:18:29.920517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.064 [2024-05-15 03:18:29.920522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.064 [2024-05-15 03:18:29.923233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.064 [2024-05-15 03:18:29.932678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.064 [2024-05-15 03:18:29.933091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.933303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.064 [2024-05-15 03:18:29.933334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.064 [2024-05-15 03:18:29.933356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.064 [2024-05-15 03:18:29.933953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.064 [2024-05-15 03:18:29.934495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.934504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.934510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:29.937333] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:29.945701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:29.946025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.946202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.946211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:29.946218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:29.946392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:29.946571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.946579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.946585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:29.949307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:29.958757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:29.959139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.959314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.959324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:29.959330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:29.959948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:29.960219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.960227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.960233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:29.962992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:29.971807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:29.972184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.972350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.972381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:29.972402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:29.973007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:29.973305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.973314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.973320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:29.976113] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:29.984847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:29.985280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.985374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.985384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:29.985391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:29.985571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:29.985746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.985754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.985760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:29.988550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:29.997836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:29.998203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.998326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:29.998336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:29.998343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:29.998524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:29.998698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:29.998706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:29.998712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.001567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.011125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.011509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.011643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.011654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.011662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.011843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.012024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.012033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.012040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.014926] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.024350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.024742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.024915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.024926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.024934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.025116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.025299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.025309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.025315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.028441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.037567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.037971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.038090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.038100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.038108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.038288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.038474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.038483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.038490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.041359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.050679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.051080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.051312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.051323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.051330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.051515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.051696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.051704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.051710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.054585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.064263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.064663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.064859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.064870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.064878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.065076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.065274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.065283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.065290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.068167] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.077527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.077844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.078101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.078112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.078119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.078298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.078484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.078492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.078499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.081363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.090682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.091071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.091292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.091302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.091310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.091493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.091674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.091682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.091688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.094556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.103870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.104254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.104435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.104445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.104452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.104637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.104816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.104825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.104831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.107704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.117023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.065 [2024-05-15 03:18:30.117403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.117589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.065 [2024-05-15 03:18:30.117600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:23:59.065 [2024-05-15 03:18:30.117611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:23:59.065 [2024-05-15 03:18:30.117790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:23:59.065 [2024-05-15 03:18:30.117970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.065 [2024-05-15 03:18:30.117978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.065 [2024-05-15 03:18:30.117984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.065 [2024-05-15 03:18:30.120858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.065 [2024-05-15 03:18:30.130177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.065 [2024-05-15 03:18:30.130627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.130788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.130817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.065 [2024-05-15 03:18:30.130838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.065 [2024-05-15 03:18:30.131423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.065 [2024-05-15 03:18:30.131736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.065 [2024-05-15 03:18:30.131745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.065 [2024-05-15 03:18:30.131751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.065 [2024-05-15 03:18:30.134622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.065 [2024-05-15 03:18:30.143264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.065 [2024-05-15 03:18:30.143679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.143857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.143868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.065 [2024-05-15 03:18:30.143875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.065 [2024-05-15 03:18:30.144053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.065 [2024-05-15 03:18:30.144234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.065 [2024-05-15 03:18:30.144242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.065 [2024-05-15 03:18:30.144249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.065 [2024-05-15 03:18:30.147123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.065 [2024-05-15 03:18:30.156447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.065 [2024-05-15 03:18:30.156758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.157008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.157039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.065 [2024-05-15 03:18:30.157060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.065 [2024-05-15 03:18:30.157629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.065 [2024-05-15 03:18:30.157810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.065 [2024-05-15 03:18:30.157818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.065 [2024-05-15 03:18:30.157824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.065 [2024-05-15 03:18:30.160694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.065 [2024-05-15 03:18:30.169671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.065 [2024-05-15 03:18:30.170044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.170262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.170272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.065 [2024-05-15 03:18:30.170279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.065 [2024-05-15 03:18:30.170461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.065 [2024-05-15 03:18:30.170648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.065 [2024-05-15 03:18:30.170656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.065 [2024-05-15 03:18:30.170663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.065 [2024-05-15 03:18:30.173534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.065 [2024-05-15 03:18:30.182850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.065 [2024-05-15 03:18:30.183233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.183358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.065 [2024-05-15 03:18:30.183368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.065 [2024-05-15 03:18:30.183375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.066 [2024-05-15 03:18:30.183558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.066 [2024-05-15 03:18:30.183747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.066 [2024-05-15 03:18:30.183756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.066 [2024-05-15 03:18:30.183762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.066 [2024-05-15 03:18:30.186630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.066 [2024-05-15 03:18:30.195940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.066 [2024-05-15 03:18:30.196235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.066 [2024-05-15 03:18:30.196461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.066 [2024-05-15 03:18:30.196477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.066 [2024-05-15 03:18:30.196484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.066 [2024-05-15 03:18:30.196667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.066 [2024-05-15 03:18:30.196847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.066 [2024-05-15 03:18:30.196856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.066 [2024-05-15 03:18:30.196862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.066 [2024-05-15 03:18:30.199738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.066 [2024-05-15 03:18:30.209065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.066 [2024-05-15 03:18:30.209426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.066 [2024-05-15 03:18:30.209615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.066 [2024-05-15 03:18:30.209626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.066 [2024-05-15 03:18:30.209633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.066 [2024-05-15 03:18:30.209813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.066 [2024-05-15 03:18:30.209993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.066 [2024-05-15 03:18:30.210002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.066 [2024-05-15 03:18:30.210009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.066 [2024-05-15 03:18:30.212878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.326 [2024-05-15 03:18:30.222218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.222532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.222787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.222796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.222803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.222982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.223163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.223171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.223178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.226057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.326 [2024-05-15 03:18:30.235371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.235650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.235896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.235906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.235913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.236091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.236277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.236285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.236291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.239166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.326 [2024-05-15 03:18:30.248471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.248846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.248944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.248954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.248961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.249141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.249320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.249329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.249335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.252199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.326 [2024-05-15 03:18:30.261675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.262113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.262368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.262398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.262419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.262884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.263065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.263073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.263079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.265949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.326 [2024-05-15 03:18:30.274765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.275214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.275506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.275539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.275560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.276133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.276314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.276322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.276333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.279198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.326 [2024-05-15 03:18:30.287845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.288322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.288565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.288597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.288619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.289025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.289204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.289212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.289218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.292084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.326 [2024-05-15 03:18:30.301051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.301496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.301791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.301821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.301842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.302113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.302292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.302300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.302307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.305182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.326 [2024-05-15 03:18:30.314144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.314597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.314820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.314830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.314837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.315016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.315197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.315205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.315214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.318086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.326 [2024-05-15 03:18:30.327300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.326 [2024-05-15 03:18:30.327751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.327945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.326 [2024-05-15 03:18:30.327955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.326 [2024-05-15 03:18:30.327962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.326 [2024-05-15 03:18:30.328141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.326 [2024-05-15 03:18:30.328321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.326 [2024-05-15 03:18:30.328329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.326 [2024-05-15 03:18:30.328335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.326 [2024-05-15 03:18:30.331207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.327 [2024-05-15 03:18:30.340560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.341010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.341184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.341194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.341201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.341380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.341567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.341575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.341581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.344451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.327 [2024-05-15 03:18:30.353765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.354199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.354452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.354494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.354516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.355101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.355587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.355596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.355602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.358473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.327 [2024-05-15 03:18:30.366934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.367344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.367520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.367531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.367538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.367718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.367898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.367906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.367912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.370787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.327 [2024-05-15 03:18:30.380100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.380575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.380887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.380917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.380939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.381307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.381494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.381502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.381509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.384374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.327 [2024-05-15 03:18:30.393336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.393751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.393969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.393979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.393986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.394166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.394346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.394354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.394360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.397232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.327 [2024-05-15 03:18:30.406536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.406957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.407175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.407204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.407225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.407587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.407767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.407775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.407781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.410647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.327 [2024-05-15 03:18:30.419594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.420040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.420236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.420246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.420252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.420431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.420616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.420625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.420631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.423499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.327 [2024-05-15 03:18:30.432766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.433203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.433484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.433517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.433538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.433832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.434011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.434019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.434026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.436863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.327 [2024-05-15 03:18:30.446005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.446441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.446626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.446637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.446644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.446824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.447005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.327 [2024-05-15 03:18:30.447013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.327 [2024-05-15 03:18:30.447019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.327 [2024-05-15 03:18:30.449887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.327 [2024-05-15 03:18:30.459198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.327 [2024-05-15 03:18:30.459634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.459901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.327 [2024-05-15 03:18:30.459911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.327 [2024-05-15 03:18:30.459918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.327 [2024-05-15 03:18:30.460098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.327 [2024-05-15 03:18:30.460278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.328 [2024-05-15 03:18:30.460286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.328 [2024-05-15 03:18:30.460292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.328 [2024-05-15 03:18:30.463164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.328 [2024-05-15 03:18:30.472298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.328 [2024-05-15 03:18:30.472713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.328 [2024-05-15 03:18:30.472955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.328 [2024-05-15 03:18:30.472965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.328 [2024-05-15 03:18:30.472972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.328 [2024-05-15 03:18:30.473152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.328 [2024-05-15 03:18:30.473332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.328 [2024-05-15 03:18:30.473340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.328 [2024-05-15 03:18:30.473347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.328 [2024-05-15 03:18:30.476218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.328 [2024-05-15 03:18:30.485548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.328 [2024-05-15 03:18:30.485985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.328 [2024-05-15 03:18:30.486250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.328 [2024-05-15 03:18:30.486260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.328 [2024-05-15 03:18:30.486271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.328 [2024-05-15 03:18:30.486451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.588 [2024-05-15 03:18:30.486638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.588 [2024-05-15 03:18:30.486647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.588 [2024-05-15 03:18:30.486654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.588 [2024-05-15 03:18:30.489493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.588 [2024-05-15 03:18:30.498775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.588 [2024-05-15 03:18:30.499190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.499374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.499384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.588 [2024-05-15 03:18:30.499391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.588 [2024-05-15 03:18:30.499576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.588 [2024-05-15 03:18:30.499758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.588 [2024-05-15 03:18:30.499766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.588 [2024-05-15 03:18:30.499772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.588 [2024-05-15 03:18:30.502644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.588 [2024-05-15 03:18:30.511945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.588 [2024-05-15 03:18:30.512301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.512550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.512583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.588 [2024-05-15 03:18:30.512605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.588 [2024-05-15 03:18:30.513192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.588 [2024-05-15 03:18:30.513507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.588 [2024-05-15 03:18:30.513515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.588 [2024-05-15 03:18:30.513521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.588 [2024-05-15 03:18:30.516386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.588 [2024-05-15 03:18:30.525017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.588 [2024-05-15 03:18:30.525445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.525627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.525639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.588 [2024-05-15 03:18:30.525648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.588 [2024-05-15 03:18:30.525828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.588 [2024-05-15 03:18:30.526008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.588 [2024-05-15 03:18:30.526016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.588 [2024-05-15 03:18:30.526022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.588 [2024-05-15 03:18:30.528891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.588 [2024-05-15 03:18:30.538019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.588 [2024-05-15 03:18:30.538390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.538562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.588 [2024-05-15 03:18:30.538576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.588 [2024-05-15 03:18:30.538582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.588 [2024-05-15 03:18:30.538757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.588 [2024-05-15 03:18:30.538932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.588 [2024-05-15 03:18:30.538940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.538946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.541820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.589 [2024-05-15 03:18:30.551137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.551560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.551812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.551842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.551865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.552453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.552784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.552793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.552799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.555451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.589 [2024-05-15 03:18:30.564109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.564535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.564831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.564861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.564882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.565119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.565284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.565291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.565297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.568025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.589 [2024-05-15 03:18:30.577058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.577529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.577853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.577883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.577905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.578201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.578366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.578374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.578379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.581105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.589 [2024-05-15 03:18:30.589966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.590379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.590577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.590589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.590595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.590770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.590944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.590952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.590958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.593674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.589 [2024-05-15 03:18:30.602891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.603300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.603519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.603529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.603536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.603710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.603888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.603896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.603902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.606618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.589 [2024-05-15 03:18:30.615837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.616298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.616508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.616539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.616561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.616828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.617003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.617011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.617017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.619726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.589 [2024-05-15 03:18:30.628829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.629252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.629492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.629502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.629509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.629683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.629857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.629865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.629871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.632586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.589 [2024-05-15 03:18:30.641819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.642252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.642515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.642548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.642571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.642832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.643007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.643019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.643025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.645798] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.589 [2024-05-15 03:18:30.654777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.655259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.655513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.655545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.655567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.589 [2024-05-15 03:18:30.655948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.589 [2024-05-15 03:18:30.656114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.589 [2024-05-15 03:18:30.656122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.589 [2024-05-15 03:18:30.656127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.589 [2024-05-15 03:18:30.658931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.589 [2024-05-15 03:18:30.667753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.589 [2024-05-15 03:18:30.668177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.668483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.589 [2024-05-15 03:18:30.668515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.589 [2024-05-15 03:18:30.668537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.669116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.669290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.669298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.669304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.672024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.590 [2024-05-15 03:18:30.680814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.681241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.681517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.681549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.681570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.682142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.682317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.682325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.682335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.685054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.590 [2024-05-15 03:18:30.693671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.694125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.694414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.694444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.694478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.695067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.695333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.695341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.695347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.698110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.590 [2024-05-15 03:18:30.706507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.706964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.707236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.707267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.707288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.707501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.707677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.707685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.707691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.710385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.590 [2024-05-15 03:18:30.719476] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.719925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.720179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.720209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.720230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.720831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.721336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.721344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.721349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.724061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.590 [2024-05-15 03:18:30.732476] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.732905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.733073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.733083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.733090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.733263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.733437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.733445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.733451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.590 [2024-05-15 03:18:30.736165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.590 [2024-05-15 03:18:30.745632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.590 [2024-05-15 03:18:30.746085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.746209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.590 [2024-05-15 03:18:30.746219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.590 [2024-05-15 03:18:30.746226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.590 [2024-05-15 03:18:30.746405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.590 [2024-05-15 03:18:30.746591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.590 [2024-05-15 03:18:30.746600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.590 [2024-05-15 03:18:30.746606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.749477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.851 [2024-05-15 03:18:30.758594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.759025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.759248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.759258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.759265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.759440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.759622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.759631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.759637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.762349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.851 [2024-05-15 03:18:30.771503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.771949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.772237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.772267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.772289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.772640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.772821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.772829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.772835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.775596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.851 [2024-05-15 03:18:30.784397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.784847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.785170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.785201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.785222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.785824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.786354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.786363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.786368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.789084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.851 [2024-05-15 03:18:30.797577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.797948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.798154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.798183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.798204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.798802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.799378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.799386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.799392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.802256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.851 [2024-05-15 03:18:30.810578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.811053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.811378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.811408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.811430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.811732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.811989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.812000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.812008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.816122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.851 [2024-05-15 03:18:30.824205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.824636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.824881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.824916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.824939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.825512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.825687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.825695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.825701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.828483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.851 [2024-05-15 03:18:30.837316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.837756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.838035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.838065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.851 [2024-05-15 03:18:30.838087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.851 [2024-05-15 03:18:30.838431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.851 [2024-05-15 03:18:30.838611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.851 [2024-05-15 03:18:30.838619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.851 [2024-05-15 03:18:30.838625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.851 [2024-05-15 03:18:30.841344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.851 [2024-05-15 03:18:30.850289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.851 [2024-05-15 03:18:30.850731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.850903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.851 [2024-05-15 03:18:30.850918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.850925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.851098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.851273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.851280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.851286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.854041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.852 [2024-05-15 03:18:30.863196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.863625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.863879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.863889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.863896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.864070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.864245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.864253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.864259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.867016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.852 [2024-05-15 03:18:30.876269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.876725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.876895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.876905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.876912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.877091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.877279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.877287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.877293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.880113] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.852 [2024-05-15 03:18:30.889287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.889727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.889996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.890005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.890016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.890189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.890364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.890372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.890379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.893205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.852 [2024-05-15 03:18:30.902197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.902554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.902796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.902806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.902813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.902987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.903162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.903170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.903177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.905902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.852 [2024-05-15 03:18:30.915088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.915447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.915625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.915635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.915642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.915816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.915990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.915998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.916004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.918720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.852 [2024-05-15 03:18:30.927982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.928429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.928736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.928767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.928788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.929005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.929180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.929188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.929195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.931920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.852 [2024-05-15 03:18:30.941046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.941486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.941765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.941794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.941816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.942401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.942862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.942870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.942876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.945585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.852 [2024-05-15 03:18:30.954147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.954573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.954841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.954851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.954880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.955478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.955713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.852 [2024-05-15 03:18:30.955726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.852 [2024-05-15 03:18:30.955733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.852 [2024-05-15 03:18:30.958570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.852 [2024-05-15 03:18:30.967137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.852 [2024-05-15 03:18:30.967551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.967817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.852 [2024-05-15 03:18:30.967826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.852 [2024-05-15 03:18:30.967833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.852 [2024-05-15 03:18:30.967997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.852 [2024-05-15 03:18:30.968165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.853 [2024-05-15 03:18:30.968173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.853 [2024-05-15 03:18:30.968178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.853 [2024-05-15 03:18:30.970900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.853 [2024-05-15 03:18:30.980026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.853 [2024-05-15 03:18:30.980470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:30.980676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:30.980706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.853 [2024-05-15 03:18:30.980728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.853 [2024-05-15 03:18:30.981313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.853 [2024-05-15 03:18:30.981873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.853 [2024-05-15 03:18:30.981882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.853 [2024-05-15 03:18:30.981888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.853 [2024-05-15 03:18:30.984601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.853 [2024-05-15 03:18:30.993011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.853 [2024-05-15 03:18:30.993431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:30.993705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:30.993736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.853 [2024-05-15 03:18:30.993757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.853 [2024-05-15 03:18:30.994130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.853 [2024-05-15 03:18:30.994309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.853 [2024-05-15 03:18:30.994317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.853 [2024-05-15 03:18:30.994324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.853 [2024-05-15 03:18:30.997130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.853 [2024-05-15 03:18:31.006136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.853 [2024-05-15 03:18:31.006489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:31.006733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.853 [2024-05-15 03:18:31.006743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:23:59.853 [2024-05-15 03:18:31.006750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:23:59.853 [2024-05-15 03:18:31.006930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:23:59.853 [2024-05-15 03:18:31.007109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.853 [2024-05-15 03:18:31.007120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.853 [2024-05-15 03:18:31.007126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.853 [2024-05-15 03:18:31.009992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.113 [2024-05-15 03:18:31.019293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.113 [2024-05-15 03:18:31.019714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.019952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.019961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.113 [2024-05-15 03:18:31.019968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.113 [2024-05-15 03:18:31.020142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.113 [2024-05-15 03:18:31.020317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.113 [2024-05-15 03:18:31.020325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.113 [2024-05-15 03:18:31.020331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.113 [2024-05-15 03:18:31.023109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.113 [2024-05-15 03:18:31.032144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.113 [2024-05-15 03:18:31.032510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.032760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.032790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.113 [2024-05-15 03:18:31.032811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.113 [2024-05-15 03:18:31.033397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.113 [2024-05-15 03:18:31.033999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.113 [2024-05-15 03:18:31.034025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.113 [2024-05-15 03:18:31.034049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.113 [2024-05-15 03:18:31.038164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.113 [2024-05-15 03:18:31.045781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.113 [2024-05-15 03:18:31.046269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.046489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.046521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.113 [2024-05-15 03:18:31.046543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.113 [2024-05-15 03:18:31.046886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.113 [2024-05-15 03:18:31.047061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.113 [2024-05-15 03:18:31.047070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.113 [2024-05-15 03:18:31.047079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.113 [2024-05-15 03:18:31.049968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.113 [2024-05-15 03:18:31.058928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.113 [2024-05-15 03:18:31.059323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.059489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.113 [2024-05-15 03:18:31.059522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.113 [2024-05-15 03:18:31.059544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.113 [2024-05-15 03:18:31.059925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.113 [2024-05-15 03:18:31.060100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.113 [2024-05-15 03:18:31.060108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.113 [2024-05-15 03:18:31.060115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.113 [2024-05-15 03:18:31.062883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.113 [2024-05-15 03:18:31.071870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.072311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.072550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.072561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.072569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.072749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.072928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.072937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.072943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.075700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.114 [2024-05-15 03:18:31.084855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.085285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.085518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.085551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.085573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.085980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.086154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.086163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.086168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.088946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.114 [2024-05-15 03:18:31.097783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.098203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.098452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.098462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.098475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.098649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.098846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.098854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.098860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.101646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.114 [2024-05-15 03:18:31.110820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.111248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.111446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.111456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.111463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.111648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.111829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.111837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.111843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.114665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.114 [2024-05-15 03:18:31.123736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.124161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.124356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.124365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.124372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.124570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.124759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.124768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.124773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.127508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.114 [2024-05-15 03:18:31.136667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.137088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.137333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.137342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.137349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.137536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.137710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.137718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.137724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.140515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.114 [2024-05-15 03:18:31.149633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.150054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.150325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.150355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.150376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.150973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.151235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.151243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.151249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.154055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.114 [2024-05-15 03:18:31.162472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.114 [2024-05-15 03:18:31.162935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.163189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.114 [2024-05-15 03:18:31.163219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.114 [2024-05-15 03:18:31.163240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.114 [2024-05-15 03:18:31.163839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.114 [2024-05-15 03:18:31.164322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.114 [2024-05-15 03:18:31.164330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.114 [2024-05-15 03:18:31.164336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.114 [2024-05-15 03:18:31.167072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.114 [2024-05-15 03:18:31.175500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.114 [2024-05-15 03:18:31.175939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.114 [2024-05-15 03:18:31.176192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.114 [2024-05-15 03:18:31.176220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.114 [2024-05-15 03:18:31.176243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.114 [2024-05-15 03:18:31.176843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.114 [2024-05-15 03:18:31.177343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.114 [2024-05-15 03:18:31.177351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.114 [2024-05-15 03:18:31.177357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.114 [2024-05-15 03:18:31.180111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.114 [2024-05-15 03:18:31.188436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.114 [2024-05-15 03:18:31.188873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.114 [2024-05-15 03:18:31.189102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.114 [2024-05-15 03:18:31.189132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.114 [2024-05-15 03:18:31.189153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.114 [2024-05-15 03:18:31.189630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.189805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.189813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.189819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.192530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.201352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.201801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.201992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.202002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.202008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.202181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.202356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.202364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.202370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.205129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.214286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.214704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.214941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.214955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.214961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.215135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.215309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.215318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.215324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.218053] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.227198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.227671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.227911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.227949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.227970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.228496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.228671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.228679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.228685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.231399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.240187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.240637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.240879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.240889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.240896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.241071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.241246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.241254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.241261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.244039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.253190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.253645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.253866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.253876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.253886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.254060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.254235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.254243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.254249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.257000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.115 [2024-05-15 03:18:31.266266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.115 [2024-05-15 03:18:31.266673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.266839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.115 [2024-05-15 03:18:31.266850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.115 [2024-05-15 03:18:31.266857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.115 [2024-05-15 03:18:31.267036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.115 [2024-05-15 03:18:31.267217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.115 [2024-05-15 03:18:31.267225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.115 [2024-05-15 03:18:31.267232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.115 [2024-05-15 03:18:31.270130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.279458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.279937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.280225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.280255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.375 [2024-05-15 03:18:31.280277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.375 [2024-05-15 03:18:31.280874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.375 [2024-05-15 03:18:31.281055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.375 [2024-05-15 03:18:31.281063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.375 [2024-05-15 03:18:31.281069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.375 [2024-05-15 03:18:31.283916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.292416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.292804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.292920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.292930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.375 [2024-05-15 03:18:31.292937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.375 [2024-05-15 03:18:31.293114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.375 [2024-05-15 03:18:31.293289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.375 [2024-05-15 03:18:31.293297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.375 [2024-05-15 03:18:31.293302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.375 [2024-05-15 03:18:31.296027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.305487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.305867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.305995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.306005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.375 [2024-05-15 03:18:31.306012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.375 [2024-05-15 03:18:31.306192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.375 [2024-05-15 03:18:31.306372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.375 [2024-05-15 03:18:31.306380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.375 [2024-05-15 03:18:31.306386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.375 [2024-05-15 03:18:31.309458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.318455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.318851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.318963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.318973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.375 [2024-05-15 03:18:31.318980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.375 [2024-05-15 03:18:31.319153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.375 [2024-05-15 03:18:31.319328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.375 [2024-05-15 03:18:31.319336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.375 [2024-05-15 03:18:31.319342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.375 [2024-05-15 03:18:31.322067] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.331361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.331789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.331965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.331995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.375 [2024-05-15 03:18:31.332017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.375 [2024-05-15 03:18:31.332619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.375 [2024-05-15 03:18:31.332893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.375 [2024-05-15 03:18:31.332900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.375 [2024-05-15 03:18:31.332906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.375 [2024-05-15 03:18:31.335625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.375 [2024-05-15 03:18:31.344250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.375 [2024-05-15 03:18:31.344664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.344825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.375 [2024-05-15 03:18:31.344834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.344840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.345006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.345170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.345178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.345184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.347945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.357302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.357725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.357907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.357938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.357962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.358573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.358749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.358757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.358763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.361519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.370358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.374485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.374633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.374646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.374654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.374835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.375009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.375021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.375027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.377880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.383471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.383869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.383989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.383999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.384006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.384185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.384365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.384373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.384380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.387194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.396481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.396869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.397130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.397162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.397183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.397782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.398111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.398123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.398132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.402248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.409952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.410452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.410705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.410736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.410757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.411068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.411243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.411251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.411260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.414019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.422892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.423262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.423485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.423495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.423502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.423676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.423851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.423860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.423866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.426578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.435905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.436280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.436453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.436463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.436475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.436669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.436858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.436866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.436871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.439593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.448866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.449178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.449305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.449315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.449321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.449499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.449675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.449683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.376 [2024-05-15 03:18:31.449688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.376 [2024-05-15 03:18:31.452407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.376 [2024-05-15 03:18:31.461851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.376 [2024-05-15 03:18:31.462198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.462389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.376 [2024-05-15 03:18:31.462399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.376 [2024-05-15 03:18:31.462406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.376 [2024-05-15 03:18:31.462587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.376 [2024-05-15 03:18:31.462761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.376 [2024-05-15 03:18:31.462769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.462775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.465514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.377 [2024-05-15 03:18:31.474889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.377 [2024-05-15 03:18:31.475242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.475443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.475452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.377 [2024-05-15 03:18:31.475459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.377 [2024-05-15 03:18:31.475670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.377 [2024-05-15 03:18:31.475845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.377 [2024-05-15 03:18:31.475854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.475860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.478646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.377 [2024-05-15 03:18:31.487913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.377 [2024-05-15 03:18:31.488198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.488441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.488450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.377 [2024-05-15 03:18:31.488457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.377 [2024-05-15 03:18:31.488635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.377 [2024-05-15 03:18:31.488810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.377 [2024-05-15 03:18:31.488818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.488824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.491545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.377 [2024-05-15 03:18:31.500840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.377 [2024-05-15 03:18:31.501176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.501386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.501416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.377 [2024-05-15 03:18:31.501437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.377 [2024-05-15 03:18:31.502036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.377 [2024-05-15 03:18:31.502360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.377 [2024-05-15 03:18:31.502368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.502374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.505094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.377 [2024-05-15 03:18:31.513820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.377 [2024-05-15 03:18:31.514246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.514424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.514453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.377 [2024-05-15 03:18:31.514489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.377 [2024-05-15 03:18:31.515076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.377 [2024-05-15 03:18:31.515405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.377 [2024-05-15 03:18:31.515413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.515419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.518186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.377 [2024-05-15 03:18:31.526758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.377 [2024-05-15 03:18:31.527190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.527360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.377 [2024-05-15 03:18:31.527370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.377 [2024-05-15 03:18:31.527377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.377 [2024-05-15 03:18:31.527556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.377 [2024-05-15 03:18:31.527730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.377 [2024-05-15 03:18:31.527738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.377 [2024-05-15 03:18:31.527744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.377 [2024-05-15 03:18:31.530526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.637 [2024-05-15 03:18:31.539820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.637 [2024-05-15 03:18:31.540183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.540357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.540368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.637 [2024-05-15 03:18:31.540375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.637 [2024-05-15 03:18:31.540560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.637 [2024-05-15 03:18:31.540741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.637 [2024-05-15 03:18:31.540749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.637 [2024-05-15 03:18:31.540755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.637 [2024-05-15 03:18:31.543606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.637 [2024-05-15 03:18:31.552902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.637 [2024-05-15 03:18:31.553261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.553418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.553429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.637 [2024-05-15 03:18:31.553436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.637 [2024-05-15 03:18:31.553620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.637 [2024-05-15 03:18:31.553808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.637 [2024-05-15 03:18:31.553816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.637 [2024-05-15 03:18:31.553822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.637 [2024-05-15 03:18:31.556673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.637 [2024-05-15 03:18:31.566011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.637 [2024-05-15 03:18:31.566299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.566478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.566489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.637 [2024-05-15 03:18:31.566496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.637 [2024-05-15 03:18:31.566675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.637 [2024-05-15 03:18:31.566856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.637 [2024-05-15 03:18:31.566864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.637 [2024-05-15 03:18:31.566870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.637 [2024-05-15 03:18:31.569743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.637 [2024-05-15 03:18:31.579226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.637 [2024-05-15 03:18:31.579708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.579954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.579967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.637 [2024-05-15 03:18:31.579974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.637 [2024-05-15 03:18:31.580154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.637 [2024-05-15 03:18:31.580334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.637 [2024-05-15 03:18:31.580342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.637 [2024-05-15 03:18:31.580348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.637 [2024-05-15 03:18:31.583216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.637 [2024-05-15 03:18:31.592419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.637 [2024-05-15 03:18:31.592886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.637 [2024-05-15 03:18:31.593129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.593139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.593146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.593331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.593521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.593529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.593535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.596493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.605699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.606197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.606391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.606402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.606409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.606612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.606809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.606818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.606824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.609784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.619047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.619524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.619796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.619826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.619854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.620337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.620605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.620617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.620626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.624740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.632611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.632997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.633244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.633254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.633261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.633440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.633625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.633634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.633640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.636511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.645775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.646169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.646485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.646516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.646538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.647123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.647422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.647430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.647436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.650267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.658735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.659200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.659367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.659376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.659383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.659565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.659741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.659749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.659755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.662509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.671630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.672004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.672217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.672246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.672269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.672680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.672856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.672864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.672870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.675646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.684696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.638 [2024-05-15 03:18:31.685160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.685372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.638 [2024-05-15 03:18:31.685402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:00.638 [2024-05-15 03:18:31.685423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:00.638 [2024-05-15 03:18:31.685797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:00.638 [2024-05-15 03:18:31.685972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.638 [2024-05-15 03:18:31.685980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.638 [2024-05-15 03:18:31.685986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.638 [2024-05-15 03:18:31.688726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.638 [2024-05-15 03:18:31.697663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.638 [2024-05-15 03:18:31.698111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.638 [2024-05-15 03:18:31.698354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.638 [2024-05-15 03:18:31.698364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.638 [2024-05-15 03:18:31.698370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.638 [2024-05-15 03:18:31.698561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.638 [2024-05-15 03:18:31.698739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.638 [2024-05-15 03:18:31.698747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.638 [2024-05-15 03:18:31.698753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.638 [2024-05-15 03:18:31.701462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.638 [2024-05-15 03:18:31.710607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.638 [2024-05-15 03:18:31.710938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.638 [2024-05-15 03:18:31.711181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.638 [2024-05-15 03:18:31.711191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.638 [2024-05-15 03:18:31.711197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.638 [2024-05-15 03:18:31.711362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.638 [2024-05-15 03:18:31.711551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.638 [2024-05-15 03:18:31.711560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.711565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.714274] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.639 [2024-05-15 03:18:31.723540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.724003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.724289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.724318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.724339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.724683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.724858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.724866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.724871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.727586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.639 [2024-05-15 03:18:31.736397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.736777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.737018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.737028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.737035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.737209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.737383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.737393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.737399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.740110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.639 [2024-05-15 03:18:31.749287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.749739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.749957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.749987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.750008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.750608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.750890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.750899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.750904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.753859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.639 [2024-05-15 03:18:31.762186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.762643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.762879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.762910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.762932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.763534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.763835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.763843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.763849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.766565] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.639 [2024-05-15 03:18:31.775294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.775746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.775930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.775940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.775947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.776122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.776296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.776304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.776314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.779149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.639 [2024-05-15 03:18:31.788368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.639 [2024-05-15 03:18:31.788841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.789020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.639 [2024-05-15 03:18:31.789030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.639 [2024-05-15 03:18:31.789037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.639 [2024-05-15 03:18:31.789211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.639 [2024-05-15 03:18:31.789386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.639 [2024-05-15 03:18:31.789394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.639 [2024-05-15 03:18:31.789399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.639 [2024-05-15 03:18:31.792247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.900 [2024-05-15 03:18:31.801436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.801885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.802138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.802148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.802155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.802335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.802520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.802528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.802535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.805393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.900 [2024-05-15 03:18:31.814668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.815044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.815234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.815264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.815286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.815883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.816064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.816072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.816078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.818856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.900 [2024-05-15 03:18:31.827655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.828014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.828138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.828148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.828154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.828329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.828510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.828518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.828524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.831237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.900 [2024-05-15 03:18:31.840550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.841004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.841289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.841319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.841339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.841940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.842186] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.842194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.842200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.844914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.900 [2024-05-15 03:18:31.853504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.853969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.854147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.854157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.854164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.854338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.854533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.854541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.854548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.857370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.900 [2024-05-15 03:18:31.866572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.867042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.867260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.867289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.867311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.867911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.868197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.868205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.868211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.870951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.900 [2024-05-15 03:18:31.879489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.879957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.880175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.880184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.880191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.880365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.880544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.880553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.880559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.883269] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.900 [2024-05-15 03:18:31.892400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.892858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.893159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.893189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.893210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.893810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.894107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.900 [2024-05-15 03:18:31.894118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.900 [2024-05-15 03:18:31.894127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.900 [2024-05-15 03:18:31.898244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.900 [2024-05-15 03:18:31.905856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.900 [2024-05-15 03:18:31.906246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.906419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.900 [2024-05-15 03:18:31.906429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.900 [2024-05-15 03:18:31.906435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.900 [2024-05-15 03:18:31.906614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.900 [2024-05-15 03:18:31.906789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.906797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.906803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.909558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.901 [2024-05-15 03:18:31.918763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.919131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.919412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.919441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.919462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.920065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.920261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.920269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.920275] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.922992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.901 [2024-05-15 03:18:31.931641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.932109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.932353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.932362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.932369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.932548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.932723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.932731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.932737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.935452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.901 [2024-05-15 03:18:31.944600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.945068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.945187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.945201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.945208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.945382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.945562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.945571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.945577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.948293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.901 [2024-05-15 03:18:31.957558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.958025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.958202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.958232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.958253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.958863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.959339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.959347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.959353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.962069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.901 [2024-05-15 03:18:31.970403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.970880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.971173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.971203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.971231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.971404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.971602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.971611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.971617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.974421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.901 [2024-05-15 03:18:31.983571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.984032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.984318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.984348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.984376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.984628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.984808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.984816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.984822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:31.987652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.901 [2024-05-15 03:18:31.996403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:31.996875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.997121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:31.997130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:31.997136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:31.997301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:31.997472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:31.997480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:31.997486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:32.000205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.901 [2024-05-15 03:18:32.009419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:32.009866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:32.010040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:32.010050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:32.010057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:32.010231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:32.010406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:32.010414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:32.010421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:32.013208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.901 [2024-05-15 03:18:32.022269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:32.022751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:32.023037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:32.023067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.901 [2024-05-15 03:18:32.023100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.901 [2024-05-15 03:18:32.023267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.901 [2024-05-15 03:18:32.023432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.901 [2024-05-15 03:18:32.023440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.901 [2024-05-15 03:18:32.023445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.901 [2024-05-15 03:18:32.026179] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.901 [2024-05-15 03:18:32.035185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.901 [2024-05-15 03:18:32.035610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.901 [2024-05-15 03:18:32.035849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.902 [2024-05-15 03:18:32.035879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.902 [2024-05-15 03:18:32.035900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.902 [2024-05-15 03:18:32.036498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.902 [2024-05-15 03:18:32.036721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.902 [2024-05-15 03:18:32.036730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.902 [2024-05-15 03:18:32.036736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.902 [2024-05-15 03:18:32.039454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.902 [2024-05-15 03:18:32.048253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.902 [2024-05-15 03:18:32.048743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.902 [2024-05-15 03:18:32.049009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.902 [2024-05-15 03:18:32.049038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:00.902 [2024-05-15 03:18:32.049059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:00.902 [2024-05-15 03:18:32.049442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:00.902 [2024-05-15 03:18:32.049622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.902 [2024-05-15 03:18:32.049631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.902 [2024-05-15 03:18:32.049636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.902 [2024-05-15 03:18:32.052345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.162 [2024-05-15 03:18:32.061498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.162 [2024-05-15 03:18:32.061912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.162 [2024-05-15 03:18:32.062121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.162 [2024-05-15 03:18:32.062131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.162 [2024-05-15 03:18:32.062138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.162 [2024-05-15 03:18:32.062311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.162 [2024-05-15 03:18:32.062512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.162 [2024-05-15 03:18:32.062521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.162 [2024-05-15 03:18:32.062527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.162 [2024-05-15 03:18:32.065419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.162 [2024-05-15 03:18:32.074648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.162 [2024-05-15 03:18:32.075118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.162 [2024-05-15 03:18:32.075289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.162 [2024-05-15 03:18:32.075299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.162 [2024-05-15 03:18:32.075306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.162 [2024-05-15 03:18:32.075484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.162 [2024-05-15 03:18:32.075679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.162 [2024-05-15 03:18:32.075687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.162 [2024-05-15 03:18:32.075693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.078518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.163 [2024-05-15 03:18:32.087473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.087954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.088088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.088117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.088138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.088577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.088751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.088759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.088765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.091478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.163 [2024-05-15 03:18:32.100344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.100791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.101081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.101112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.101133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.101673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.101848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.101858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.101864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.104575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.163 [2024-05-15 03:18:32.113248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.113580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.113802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.113811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.113818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.113983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.114149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.114156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.114162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.116893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.163 [2024-05-15 03:18:32.126328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.126775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.126979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.127010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.127031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.127446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.127626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.127635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.127641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.130392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.163 [2024-05-15 03:18:32.139373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.139711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.139905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.139916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.139922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.140096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.140269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.140277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.140287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.143006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.163 [2024-05-15 03:18:32.152292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.152742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.152968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.152999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.153020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.153618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.153943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.153952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.153958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.156697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.163 [2024-05-15 03:18:32.165336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.165817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.165987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.165999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.166006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.166184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.166378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.166387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.166394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.169286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.163 [2024-05-15 03:18:32.178430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.178853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.179095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.179126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.179148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.179746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.180049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.180058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.180064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.182936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.163 [2024-05-15 03:18:32.191344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.191673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.191917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.191927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.191934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.192108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.163 [2024-05-15 03:18:32.192282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.163 [2024-05-15 03:18:32.192289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.163 [2024-05-15 03:18:32.192295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.163 [2024-05-15 03:18:32.195061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.163 [2024-05-15 03:18:32.204444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.163 [2024-05-15 03:18:32.204913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.205161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.163 [2024-05-15 03:18:32.205191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.163 [2024-05-15 03:18:32.205212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.163 [2024-05-15 03:18:32.205817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.205992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.206000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.206006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.208722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.164 [2024-05-15 03:18:32.217376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.217841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.218068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.218098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.218119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.218720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.219164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.219172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.219178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.221894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.164 [2024-05-15 03:18:32.230240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.230705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.230878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.230908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.230929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.231529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.232112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.232120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.232127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.234839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.164 [2024-05-15 03:18:32.243191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.243635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.243824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.243855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.243877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.244463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.245065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.245089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.245108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.247887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.164 [2024-05-15 03:18:32.256078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.256522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.256829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.256859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.256880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.257111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.257276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.257283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.257289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.260020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.164 [2024-05-15 03:18:32.269000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.269461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.269766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.269797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.269819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.270128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.270308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.270316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.270322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.273163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.164 [2024-05-15 03:18:32.281967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.282446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.282752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.282782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.282803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.283000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.283175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.283182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.283188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.285915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.164 [2024-05-15 03:18:32.294798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.295224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.295462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.295507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.295529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.296115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.296342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.296350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.296356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.299092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.164 [2024-05-15 03:18:32.307708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.308156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.308390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.308427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.308449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.309060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.309235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.309242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.309248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.164 [2024-05-15 03:18:32.312003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.164 [2024-05-15 03:18:32.320955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.164 [2024-05-15 03:18:32.321404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.321613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.164 [2024-05-15 03:18:32.321646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.164 [2024-05-15 03:18:32.321667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.164 [2024-05-15 03:18:32.322087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.164 [2024-05-15 03:18:32.322267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.164 [2024-05-15 03:18:32.322275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.164 [2024-05-15 03:18:32.322281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.425 [2024-05-15 03:18:32.325132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.425 [2024-05-15 03:18:32.334016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.425 [2024-05-15 03:18:32.334488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.334776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.334806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.425 [2024-05-15 03:18:32.334828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.425 [2024-05-15 03:18:32.335057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.425 [2024-05-15 03:18:32.335221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.425 [2024-05-15 03:18:32.335229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.425 [2024-05-15 03:18:32.335234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.425 [2024-05-15 03:18:32.337963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.425 [2024-05-15 03:18:32.346840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.425 [2024-05-15 03:18:32.347289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.347575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.347608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.425 [2024-05-15 03:18:32.347636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.425 [2024-05-15 03:18:32.348221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.425 [2024-05-15 03:18:32.348479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.425 [2024-05-15 03:18:32.348487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.425 [2024-05-15 03:18:32.348493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.425 [2024-05-15 03:18:32.351146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.425 [2024-05-15 03:18:32.359765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.425 [2024-05-15 03:18:32.360206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.360346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.360355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.425 [2024-05-15 03:18:32.360362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.425 [2024-05-15 03:18:32.360549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.425 [2024-05-15 03:18:32.360725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.425 [2024-05-15 03:18:32.360733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.425 [2024-05-15 03:18:32.360739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.425 [2024-05-15 03:18:32.363453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.425 [2024-05-15 03:18:32.372708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.425 [2024-05-15 03:18:32.373152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.373377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.373387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.425 [2024-05-15 03:18:32.373394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.425 [2024-05-15 03:18:32.373584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.425 [2024-05-15 03:18:32.373760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.425 [2024-05-15 03:18:32.373768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.425 [2024-05-15 03:18:32.373774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.425 [2024-05-15 03:18:32.376556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.425 [2024-05-15 03:18:32.385673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.425 [2024-05-15 03:18:32.386186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.386397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.425 [2024-05-15 03:18:32.386427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.425 [2024-05-15 03:18:32.386448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.387057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.387356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.387364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.387369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.390140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.426 [2024-05-15 03:18:32.398707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.399177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.399423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.399452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.399487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.400058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.400233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.400241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.400247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.403029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.426 [2024-05-15 03:18:32.411612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.412017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.412267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.412297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.412318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.412917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.413442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.413450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.413456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.416166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.426 [2024-05-15 03:18:32.424481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.424898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.425072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.425082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.425088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.425252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.425420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.425428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.425433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.428166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.426 [2024-05-15 03:18:32.437324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.437767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.437971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.438001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.438022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.438406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.438586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.438594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.438600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.441310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.426 [2024-05-15 03:18:32.450144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.450576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.450843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.450853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.450859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.451034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.451208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.451216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.451222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.453944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.426 [2024-05-15 03:18:32.462976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.463368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.463611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.463622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.463629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.463803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.463978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.463989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.463995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.466739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.426 [2024-05-15 03:18:32.475914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.476262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.476427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.476437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.476443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.476643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.476832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.476840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.476846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.426 [2024-05-15 03:18:32.479618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.426 [2024-05-15 03:18:32.488753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.426 [2024-05-15 03:18:32.489216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.489530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.426 [2024-05-15 03:18:32.489563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.426 [2024-05-15 03:18:32.489584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.426 [2024-05-15 03:18:32.490171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.426 [2024-05-15 03:18:32.490500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.426 [2024-05-15 03:18:32.490511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.426 [2024-05-15 03:18:32.490520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.494633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.427 [2024-05-15 03:18:32.502559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.502977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.503214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.503240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.503263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 [2024-05-15 03:18:32.503813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.503989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.503997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.504006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.506761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.427 [2024-05-15 03:18:32.515496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.515932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.516120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.516130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.516136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 [2024-05-15 03:18:32.516310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.516492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.516517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.516523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.519248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1166161 Killed "${NVMF_APP[@]}" "$@" 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1167568 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1167568 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:01.427 [2024-05-15 03:18:32.528592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1167568 ']' 00:24:01.427 [2024-05-15 03:18:32.529019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.427 [2024-05-15 03:18:32.529156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.529167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.529173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.427 [2024-05-15 03:18:32.529353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.529539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.529548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.529555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.427 03:18:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 [2024-05-15 03:18:32.532425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.427 [2024-05-15 03:18:32.541748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.542202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.542423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.542434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.542440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 [2024-05-15 03:18:32.542626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.542807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.542815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.542821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.545711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.427 [2024-05-15 03:18:32.554856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.555308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.555549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.555560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.555567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 [2024-05-15 03:18:32.555740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.555914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.555922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.555928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.558774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.427 [2024-05-15 03:18:32.567900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.568239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.568391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.568401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.568408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.427 [2024-05-15 03:18:32.568606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.427 [2024-05-15 03:18:32.568787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.427 [2024-05-15 03:18:32.568796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.427 [2024-05-15 03:18:32.568807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.427 [2024-05-15 03:18:32.571674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.427 [2024-05-15 03:18:32.574361] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:24:01.427 [2024-05-15 03:18:32.574410] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.427 [2024-05-15 03:18:32.580981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.427 [2024-05-15 03:18:32.581439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.581631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.427 [2024-05-15 03:18:32.581642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.427 [2024-05-15 03:18:32.581650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.428 [2024-05-15 03:18:32.581830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.428 [2024-05-15 03:18:32.582011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.428 [2024-05-15 03:18:32.582019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.428 [2024-05-15 03:18:32.582027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.428 [2024-05-15 03:18:32.584902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.688 [2024-05-15 03:18:32.594183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.688 [2024-05-15 03:18:32.594655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.594880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.594890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.688 [2024-05-15 03:18:32.594897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.688 [2024-05-15 03:18:32.595073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.688 [2024-05-15 03:18:32.595248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.688 [2024-05-15 03:18:32.595256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.688 [2024-05-15 03:18:32.595262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.688 [2024-05-15 03:18:32.598115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.688 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.688 [2024-05-15 03:18:32.607336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.688 [2024-05-15 03:18:32.607813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.607988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.607998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.688 [2024-05-15 03:18:32.608005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.688 [2024-05-15 03:18:32.608186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.688 [2024-05-15 03:18:32.608369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.688 [2024-05-15 03:18:32.608378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.688 [2024-05-15 03:18:32.608384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.688 [2024-05-15 03:18:32.611225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.688 [2024-05-15 03:18:32.620503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.688 [2024-05-15 03:18:32.620931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.621178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.621188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.688 [2024-05-15 03:18:32.621196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.688 [2024-05-15 03:18:32.621376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.688 [2024-05-15 03:18:32.621565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.688 [2024-05-15 03:18:32.621574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.688 [2024-05-15 03:18:32.621581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.688 [2024-05-15 03:18:32.624449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.688 [2024-05-15 03:18:32.633252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:01.688 [2024-05-15 03:18:32.633606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.688 [2024-05-15 03:18:32.634060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.634318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.688 [2024-05-15 03:18:32.634328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.688 [2024-05-15 03:18:32.634335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.634521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.634702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.634711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.634717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.637738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.689 [2024-05-15 03:18:32.646742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.647125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.647312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.647322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.647330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.647517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.647698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.647710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.647717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.650586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.689 [2024-05-15 03:18:32.659907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.660360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.660529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.660541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.660548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.660729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.660909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.660917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.660923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.663796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.689 [2024-05-15 03:18:32.673062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.673438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.673688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.673700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.673707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.673887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.674068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.674076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.674082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.676953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.689 [2024-05-15 03:18:32.686296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.686701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.686828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.686840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.686847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.687027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.687209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.687218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.687229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.690128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.689 [2024-05-15 03:18:32.699445] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.699809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.699935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.699946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.699953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.700133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.700313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.700321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.700328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.703200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.689 [2024-05-15 03:18:32.712683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.689 [2024-05-15 03:18:32.712997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.713121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.689 [2024-05-15 03:18:32.713131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:01.689 [2024-05-15 03:18:32.713138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:01.689 [2024-05-15 03:18:32.713317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:01.689 [2024-05-15 03:18:32.713505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.689 [2024-05-15 03:18:32.713513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.689 [2024-05-15 03:18:32.713520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.689 [2024-05-15 03:18:32.714864] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.689 [2024-05-15 03:18:32.714891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.689 [2024-05-15 03:18:32.714898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.689 [2024-05-15 03:18:32.714904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.689 [2024-05-15 03:18:32.714909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.689 [2024-05-15 03:18:32.714947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:01.689 [2024-05-15 03:18:32.715035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:01.689 [2024-05-15 03:18:32.715036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:01.689 [2024-05-15 03:18:32.716391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.689 [2024-05-15 03:18:32.725887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.689 [2024-05-15 03:18:32.726319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.689 [2024-05-15 03:18:32.726511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.689 [2024-05-15 03:18:32.726522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.689 [2024-05-15 03:18:32.726530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.689 [2024-05-15 03:18:32.726711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.689 [2024-05-15 03:18:32.726892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.689 [2024-05-15 03:18:32.726900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.689 [2024-05-15 03:18:32.726907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.689 [2024-05-15 03:18:32.729781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.689 [2024-05-15 03:18:32.739103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.689 [2024-05-15 03:18:32.739569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.689 [2024-05-15 03:18:32.739755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.689 [2024-05-15 03:18:32.739765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.689 [2024-05-15 03:18:32.739773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.689 [2024-05-15 03:18:32.739954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.740134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.740142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.740150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.743020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.752342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.752736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.752992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.753005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.753013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.753196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.753377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.753386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.753393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.756272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.765494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.765899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.766118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.766128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.766142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.766321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.766507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.766516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.766524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.769388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.778708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.779198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.779393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.779403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.779411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.779597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.779778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.779786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.779794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.782668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.791800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.792203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.792448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.792458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.792471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.792650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.792831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.792839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.792846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.795723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.805034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.805421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.805645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.805656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.805663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.805845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.806025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.806033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.806039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.808909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.818215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.818672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.818851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.818862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.818869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.819048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.819228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.819236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.819242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.822114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.831440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.831762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.831959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.831969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.831976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.832156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.832336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.832344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.832350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.835225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.690 [2024-05-15 03:18:32.844541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.690 [2024-05-15 03:18:32.844972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.845088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.690 [2024-05-15 03:18:32.845098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.690 [2024-05-15 03:18:32.845105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.690 [2024-05-15 03:18:32.845284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.690 [2024-05-15 03:18:32.845473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.690 [2024-05-15 03:18:32.845482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.690 [2024-05-15 03:18:32.845488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.690 [2024-05-15 03:18:32.848355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.951 [2024-05-15 03:18:32.857665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.951 [2024-05-15 03:18:32.858055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.858300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.858311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.951 [2024-05-15 03:18:32.858317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.951 [2024-05-15 03:18:32.858509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.951 [2024-05-15 03:18:32.858690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.951 [2024-05-15 03:18:32.858698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.951 [2024-05-15 03:18:32.858704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.951 [2024-05-15 03:18:32.861573] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.951 [2024-05-15 03:18:32.870889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.951 [2024-05-15 03:18:32.871212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.871432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.871443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.951 [2024-05-15 03:18:32.871449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.951 [2024-05-15 03:18:32.871634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.951 [2024-05-15 03:18:32.871815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.951 [2024-05-15 03:18:32.871823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.951 [2024-05-15 03:18:32.871829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.951 [2024-05-15 03:18:32.874701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.951 [2024-05-15 03:18:32.884021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.951 [2024-05-15 03:18:32.884451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.884655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.951 [2024-05-15 03:18:32.884667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.951 [2024-05-15 03:18:32.884674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.884853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.885034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.885045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.885051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.887926] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.897253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.897679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.897877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.897887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.897894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.898073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.898253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.898261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.898267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.901138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.910459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.910823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.911066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.911077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.911084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.911263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.911444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.911452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.911458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.914332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.923659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.923969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.924222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.924232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.924239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.924419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.924603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.924612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.924622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.927492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.936814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.937234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.937399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.937409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.937416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.937600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.937780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.937788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.937794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.940659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.949980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.950396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.950633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.950644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.950652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.950832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.951011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.951019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.951025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.953896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.963221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.963705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.963875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.963886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.963893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.964073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.964253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.964262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.964268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.967146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.976473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.976893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.977078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.977089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.977096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.977276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.977456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.977469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.977477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.980342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:32.989657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:32.990017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.990216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:32.990226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:32.990233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:32.990413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:32.990597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:32.990606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:32.990612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:32.993485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:33.002800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.952 [2024-05-15 03:18:33.003141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:33.003391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.952 [2024-05-15 03:18:33.003400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.952 [2024-05-15 03:18:33.003407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.952 [2024-05-15 03:18:33.003591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.952 [2024-05-15 03:18:33.003772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.952 [2024-05-15 03:18:33.003780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.952 [2024-05-15 03:18:33.003786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.952 [2024-05-15 03:18:33.006656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.952 [2024-05-15 03:18:33.015980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.016402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.016623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.016633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.016640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.016820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.017001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.017009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.017015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.019887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.029211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.029655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.029827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.029837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.029844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.030024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.030204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.030212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.030218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.033088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.042397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.042832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.043054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.043065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.043071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.043250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.043430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.043438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.043445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.046313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.055620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.056007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.056252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.056262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.056269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.056447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.056631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.056640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.056646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.059523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.068832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.069346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.069613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.069624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.069630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.069810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.069989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.069997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.070003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.072881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.082029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.082461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.082665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.082675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.082682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.082862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.083043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.083051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.083057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.085926] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.095236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.095665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.095920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.095931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.095938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.096116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.096296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.096304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.096310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.953 [2024-05-15 03:18:33.099174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.953 [2024-05-15 03:18:33.108313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.953 [2024-05-15 03:18:33.108772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.109011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.953 [2024-05-15 03:18:33.109021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:01.953 [2024-05-15 03:18:33.109028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:01.953 [2024-05-15 03:18:33.109208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:01.953 [2024-05-15 03:18:33.109387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.953 [2024-05-15 03:18:33.109395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.953 [2024-05-15 03:18:33.109402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.213 [2024-05-15 03:18:33.112277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.213 [2024-05-15 03:18:33.121423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.213 [2024-05-15 03:18:33.121882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.213 [2024-05-15 03:18:33.122108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.122118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.122125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.122304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.122488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.122497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.122503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.125365] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.134501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.134957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.135125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.135135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.135145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.135324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.135508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.135517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.135523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.138389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.147699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.148128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.148302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.148312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.148319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.148503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.148683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.148691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.148698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.151567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.160886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.161237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.161477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.161487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.161494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.161673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.161853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.161861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.161867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.164767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.174068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.174505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.174750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.174761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.174768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.174952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.175133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.175142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.175148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.178010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.187143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.187797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.187808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.187814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.187994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.188175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.188183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.188189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.191064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.200371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.200778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.201025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.201036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.201043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.201223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.201402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.201411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.201417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.204287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.213595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.214025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.214279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.214288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.214296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.214480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.214664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.214673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.214679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.217552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.226693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.227123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.227373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.227384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.227391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.227576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.227756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.227765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.227772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.230640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.214 [2024-05-15 03:18:33.239792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.214 [2024-05-15 03:18:33.240236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.240458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.214 [2024-05-15 03:18:33.240480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.214 [2024-05-15 03:18:33.240492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.214 [2024-05-15 03:18:33.240675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.214 [2024-05-15 03:18:33.240857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.214 [2024-05-15 03:18:33.240865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.214 [2024-05-15 03:18:33.240871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.214 [2024-05-15 03:18:33.243748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.215 [2024-05-15 03:18:33.252907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.253340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.253605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.253616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.253623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.253804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.253983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.253997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.254004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.256874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.215 [2024-05-15 03:18:33.266017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.266426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.266589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.266600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.266607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.266787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.266967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.266975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.266981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.269854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.215 [2024-05-15 03:18:33.279190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.279644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.279869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.279879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.279886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.280067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.280247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.280255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.280261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.283134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.215 [2024-05-15 03:18:33.292278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.292902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.293071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.293082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.293089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.293269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.293449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.293457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.293473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.296339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.215 [2024-05-15 03:18:33.305480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.305834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.306061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.306071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.306078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.306259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.306438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.306446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.306452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.309318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.215 [2024-05-15 03:18:33.318623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.319058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.319280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.319291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.319300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.319484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.319665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.319674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.319680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.322545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.215 [2024-05-15 03:18:33.331859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.332233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.332481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.332493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.332500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.332679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.332859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.332868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.332875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.335746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.215 [2024-05-15 03:18:33.345065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.345518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.345741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.345752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.345758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.345938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.346119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.346130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.346137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.349011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.215 [2024-05-15 03:18:33.358156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.358580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.358826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.358836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.358843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.215 [2024-05-15 03:18:33.359023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.215 [2024-05-15 03:18:33.359202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.215 [2024-05-15 03:18:33.359210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.215 [2024-05-15 03:18:33.359217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.215 [2024-05-15 03:18:33.362087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.215 [2024-05-15 03:18:33.371389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.215 [2024-05-15 03:18:33.371811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.372078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.215 [2024-05-15 03:18:33.372088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.215 [2024-05-15 03:18:33.372095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.216 [2024-05-15 03:18:33.372275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.216 [2024-05-15 03:18:33.372455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.216 [2024-05-15 03:18:33.372463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.216 [2024-05-15 03:18:33.372474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.474 [2024-05-15 03:18:33.375343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.474 [2024-05-15 03:18:33.384496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.384946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:02.474 [2024-05-15 03:18:33.385216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.385227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.385234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.385413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:24:02.474 [2024-05-15 03:18:33.385598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.385607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.385613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.474 [2024-05-15 03:18:33.388488] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.474 [2024-05-15 03:18:33.397635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.398021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.398288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.398299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.398306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.398490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 [2024-05-15 03:18:33.398672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.398680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.398686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 [2024-05-15 03:18:33.401559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
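Interleaved with the reconnect noise, the `(( i == 0 ))` / `return 0` pair is the tail of a bounded retry helper in autotest_common.sh: the harness polls until the freshly launched target responds, and only the final check and return are visible at this xtrace level. A minimal sketch of that idiom (loop bound, sleep interval, and function name are assumptions, not taken from autotest_common.sh):

# poll until a UNIX socket appears, give up after ~20 s
wait_for_sock() {
    local sock=$1 i
    for (( i = 40; i != 0; i-- )); do
        [[ -S "$sock" ]] && break
        sleep 0.5
    done
    (( i == 0 )) && return 1   # loop exhausted: the app never came up
    return 0
}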
00:24:02.474 [2024-05-15 03:18:33.410872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.411248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.411433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.411444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.411450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.411636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 [2024-05-15 03:18:33.411817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.411826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.411835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 [2024-05-15 03:18:33.414705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.474 [2024-05-15 03:18:33.424017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.424299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:02.474 [2024-05-15 03:18:33.424402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.424632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.424644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.424652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.424833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 [2024-05-15 03:18:33.425012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.425021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.425027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 [2024-05-15 03:18:33.427901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
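`rpc_cmd` here is a thin wrapper around SPDK's scripts/rpc.py talking to the app's RPC socket; the `*** TCP Transport Init ***` notice is the target acknowledging the transport was created. Issued standalone, the same call and a read-back would look like this (a sketch; /var/tmp/spdk.sock is only the SPDK default socket path):

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports   # verify the tcp transport is listed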
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:02.474 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.474 [2024-05-15 03:18:33.437216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.437622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.437800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.437810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.437817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.437996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 [2024-05-15 03:18:33.438176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.438185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.438191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 [2024-05-15 03:18:33.441059] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.474 [2024-05-15 03:18:33.450366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.474 [2024-05-15 03:18:33.450729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.450905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.474 [2024-05-15 03:18:33.450919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.474 [2024-05-15 03:18:33.450926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.474 [2024-05-15 03:18:33.451105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.474 [2024-05-15 03:18:33.451284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.474 [2024-05-15 03:18:33.451293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.474 [2024-05-15 03:18:33.451298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.474 [2024-05-15 03:18:33.454172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
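Per the rpc.py convention the two positional arguments to bdev_malloc_create are the total size in MiB and the logical block size in bytes, so `bdev_malloc_create 64 512 -b Malloc0` asks for a RAM-backed bdev of 131072 blocks (a hedged reading of the arguments, consistent with MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 used elsewhere in these scripts):

# 64 MiB split into 512-byte blocks
echo $(( 64 * 1024 * 1024 / 512 ))   # -> 131072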
00:24:02.474 [2024-05-15 03:18:33.463526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.474 [2024-05-15 03:18:33.463972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.474 [2024-05-15 03:18:33.464157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.474 [2024-05-15 03:18:33.464167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.474 [2024-05-15 03:18:33.464175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.474 [2024-05-15 03:18:33.464356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.475 [2024-05-15 03:18:33.464541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.475 [2024-05-15 03:18:33.464549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.475 [2024-05-15 03:18:33.464556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.475 [2024-05-15 03:18:33.467424] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.475 Malloc0 00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.475 [2024-05-15 03:18:33.476745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.475 [2024-05-15 03:18:33.477207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.475 [2024-05-15 03:18:33.477382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.475 [2024-05-15 03:18:33.477392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420 00:24:02.475 [2024-05-15 03:18:33.477399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set 00:24:02.475 [2024-05-15 03:18:33.477582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor 00:24:02.475 [2024-05-15 03:18:33.477762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.475 [2024-05-15 03:18:33.477770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.475 [2024-05-15 03:18:33.477777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.475 [2024-05-15 03:18:33.480650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:02.475 [2024-05-15 03:18:33.489961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:02.475 [2024-05-15 03:18:33.490310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.475 [2024-05-15 03:18:33.490473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.475 [2024-05-15 03:18:33.490484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148e840 with addr=10.0.0.2, port=4420
00:24:02.475 [2024-05-15 03:18:33.490491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148e840 is same with the state(5) to be set
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:02.475 [2024-05-15 03:18:33.490670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e840 (9): Bad file descriptor
00:24:02.475 [2024-05-15 03:18:33.490850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.475 [2024-05-15 03:18:33.490859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.475 [2024-05-15 03:18:33.490866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.475 [2024-05-15 03:18:33.493242] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:24:02.475 [2024-05-15 03:18:33.493451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:02.475 [2024-05-15 03:18:33.493732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:02.475 03:18:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1166639
00:24:02.475 [2024-05-15 03:18:33.503201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.475 [2024-05-15 03:18:33.535745] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
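Taken together, the rpc_cmd lines above are the whole target-side bring-up the host has been retrying against; the moment the listener opens at 03:18:33.493451, the reset loop that still failed at .493732 succeeds at .535745. The same sequence against a bare nvmf_tgt, using the exact arguments from the trace (a sketch via the stock scripts/rpc.py client):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420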
00:24:12.450
00:24:12.450 Latency(us)
00:24:12.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:12.450 Verification LBA range: start 0x0 length 0x4000
00:24:12.450 Nvme1n1 : 15.05 7917.96 30.93 12135.18 0.00 6346.67 605.50 44678.46
00:24:12.450 ===================================================================================================================
00:24:12.450 Total : 7917.96 30.93 12135.18 0.00 6346.67 605.50 44678.46
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:12.450 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:12.451 rmmod nvme_tcp
00:24:12.451 rmmod nvme_fabrics
00:24:12.451 rmmod nvme_keyring
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1167568 ']'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1167568
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1167568 ']'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1167568
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1167568
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1167568'
00:24:12.451 killing process with pid 1167568
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1167568
00:24:12.451 [2024-05-15 03:18:42.343906] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1167568
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:12.451 03:18:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.829 03:18:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:13.829
00:24:13.830 real 0m26.123s
00:24:13.830 user 1m3.595s
00:24:13.830 sys 0m6.036s
00:24:13.830 03:18:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:13.830 03:18:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:13.830 ************************************
00:24:13.830 END TEST nvmf_bdevperf
00:24:13.830 ************************************
00:24:13.830 03:18:44 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:24:13.830 03:18:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:24:13.830 03:18:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:24:13.830 03:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:13.830 ************************************
00:24:13.830 START TEST nvmf_target_disconnect
00:24:13.830 ************************************
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:24:13.830 * Looking for test storage...
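In the bdevperf summary above, MiB/s is simply IOPS times the 4096-byte I/O size, and the large Fail/s column reflects I/O rejected while the controller sat in failed state between resets; runtime runs past the nominal 10 s (`15.05`) because of the stall during the disconnect. Checking the throughput column from a shell:

# 7917.96 IOPS x 4096 B, expressed in MiB/s
echo "scale=4; 7917.96 * 4096 / 1048576" | bc   # -> 30.9295, the 30.93 reported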
00:24:13.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:24:13.830 03:18:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
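The variables exported at the top of common.sh above (NVMF_PORT, NVME_HOSTNQN, NVME_HOST, NVME_CONNECT) are exactly what the initiator side of these tests consumes. Wired together with nvme-cli they would look like the following sketch, using the NVME_SUBNQN from the trace; this is an illustration of how those values fit together, not a command the log runs at this point:

# generate a host NQN the same way common.sh does, then connect
NVME_HOSTNQN=$(nvme gen-hostnqn)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN"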
00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:19.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.153 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:19.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.154 03:18:49 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:19.154 Found net devices under 0000:86:00.0: cvl_0_0 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:19.154 Found net devices under 0000:86:00.1: cvl_0_1 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:19.154 00:24:19.154 --- 10.0.0.2 ping statistics --- 00:24:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.154 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:19.154 00:24:19.154 --- 10.0.0.1 ping statistics --- 00:24:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.154 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.154 03:18:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:19.154 ************************************ 00:24:19.154 START TEST nvmf_target_disconnect_tc1 00:24:19.154 ************************************ 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.154 EAL: No 
00:24:19.154 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.154 [2024-05-15 03:18:50.140641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.154 [2024-05-15 03:18:50.141019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.154 [2024-05-15 03:18:50.141055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3bae0 with addr=10.0.0.2, port=4420
00:24:19.154 [2024-05-15 03:18:50.141104] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:19.154 [2024-05-15 03:18:50.141136] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:19.154 [2024-05-15 03:18:50.141155] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:24:19.154 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:24:19.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:24:19.154 Initializing NVMe Controllers
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]]
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e
00:24:19.154 
00:24:19.154 real	0m0.093s
00:24:19.154 user	0m0.036s
00:24:19.154 sys	0m0.055s
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:24:19.154 ************************************
00:24:19.154 END TEST nvmf_target_disconnect_tc1
00:24:19.154 ************************************
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:19.154 ************************************
00:24:19.154 START TEST nvmf_target_disconnect_tc2
00:24:19.154 ************************************
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
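tc1 passes precisely because the probe fails: nothing is listening on 10.0.0.2:4420 yet, so the kernel refuses the TCP connection and connect() returns errno 111 (ECONNREFUSED), which SPDK surfaces as the spdk_nvme_probe() failure above. The set +e / trap - ERR bracketing around the call is what lets the harness treat the non-zero exit as the expected result. A hedged way to reproduce the refusal from a shell, assuming the same address:

  # bash's /dev/tcp pseudo-device attempts a plain TCP connect
  timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null || echo 'connection refused, as tc1 expects'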
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1172720
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1172720
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1172720 ']'
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:19.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:19.154 03:18:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.412 [2024-05-15 03:18:50.283601] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:24:19.412 [2024-05-15 03:18:50.283640] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:19.412 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.412 [2024-05-15 03:18:50.353957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:19.412 [2024-05-15 03:18:50.428545] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:19.412 [2024-05-15 03:18:50.428579] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:19.412 [2024-05-15 03:18:50.428586] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:19.412 [2024-05-15 03:18:50.428592] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:19.412 [2024-05-15 03:18:50.428597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
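disconnect_init now starts a real target for tc2: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with instance id 0 and core mask 0xF0, and waitforlisten polls the app's RPC Unix socket until it answers. A rough stand-alone equivalent, assuming SPDK's bundled scripts/rpc.py from the repository root:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll until /var/tmp/spdk.sock accepts RPCs (this is what waitforlisten does)
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done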
00:24:19.412 [2024-05-15 03:18:50.428675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:24:19.412 [2024-05-15 03:18:50.428785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:24:19.412 [2024-05-15 03:18:50.428889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:24:19.412 [2024-05-15 03:18:50.428890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.979 Malloc0
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:19.979 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:19.979 [2024-05-15 03:18:51.139306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
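The rpc_cmd calls here and in the trace that follows assemble the whole target in a handful of steps: a 64 MiB RAM-backed bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as its namespace, and finally the data and discovery listeners on 10.0.0.2:4420. Collected in one place (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so this is a sketch of the same session, flags copied from the trace):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420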
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:20.238 [2024-05-15 03:18:51.168240] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:24:20.238 [2024-05-15 03:18:51.168459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=1172796
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2
00:24:20.238 03:18:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:20.238 EAL: No free 2048 kB hugepages reported on node 1
00:24:22.148 03:18:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 1172720
00:24:22.148 03:18:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2
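This is the fault injection at the heart of tc2: the reconnect example gets a 2-second head start of random 4 KiB read/write I/O at queue depth 32 across four cores (-q 32 -o 4096 -w randrw -c 0xF), and the harness then SIGKILLs the target (pid 1172720) out from under it. Note also the deliberate core-mask split that keeps the two processes from contending: nvmf_tgt was started with -m 0xF0 (cores 4-7, matching the reactor messages earlier) and reconnect with -c 0xF (cores 0-3). A hedged way to double-check that from a shell, using the pid variables the harness sets above:

  taskset -cp "$nvmfpid"        # expect 4-7 for the target (-m 0xF0)
  taskset -cp "$reconnectpid"   # expect 0-3 for the initiator (-c 0xF)

Everything that follows is the initiator side observing the kill.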
00:24:22.148 Read completed with error (sct=0, sc=8)
00:24:22.148 starting I/O failed
00:24:22.148 Read completed with error (sct=0, sc=8)
00:24:22.148 starting I/O failed
[... the rest of the first burst of queued I/Os, a Read/Write mix, completes with the same "error (sct=0, sc=8)" / "starting I/O failed" pair ...]
00:24:22.148 [2024-05-15 03:18:53.195974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... a second burst of Read/Write completions fails the same way ...]
00:24:22.148 [2024-05-15 03:18:53.196175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... a third burst of Read/Write completions fails the same way ...]
00:24:22.149 [2024-05-15 03:18:53.196369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.149 [2024-05-15 03:18:53.196658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.149 [2024-05-15 03:18:53.196843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.149 [2024-05-15 03:18:53.196858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.149 qpair failed and we were unable to recover it.
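Two distinct failure signatures appear once the target dies. First, the I/Os already queued on each of the four qpairs complete with error and the qpairs are torn down with CQ transport error -6 (ENXIO, "No such device or address"): the TCP socket underneath them is simply gone. Second, every reconnect attempt from then on fails in posix_sock_create with errno 111 (ECONNREFUSED), because nothing listens on 10.0.0.2:4420 anymore; the reconnect example keeps retrying for the remainder of its -t 10 run, which is all the repetition below. A sketch of what that retry loop observes from a shell, assuming the same address (in tc2 the target intentionally never comes back):

  while ! timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      sleep 0.5    # refused every time until a listener reappears on port 4420
  done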
[... the same sequence (two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt, with only the timestamps advancing ...]
00:24:22.152 [2024-05-15 03:18:53.243171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.152 [2024-05-15 03:18:53.243446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.152 [2024-05-15 03:18:53.243485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.152 qpair failed and we were unable to recover it.
00:24:22.152 [2024-05-15 03:18:53.243691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.243892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.243920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.244125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.244378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.244406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.244613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.244903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.244944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.245113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.245369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.245398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.245654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.245913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.245943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.246220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.246448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.152 [2024-05-15 03:18:53.246461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.152 qpair failed and we were unable to recover it. 00:24:22.152 [2024-05-15 03:18:53.246716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.246824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.246838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 
00:24:22.153 [2024-05-15 03:18:53.247097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.247419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.247448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.247666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.247848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.247877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.248167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.248383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.248396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.248564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.248733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.248775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.249009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.249220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.249233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.249393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.249589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.249618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.249905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.250155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.250184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 
00:24:22.153 [2024-05-15 03:18:53.250406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.250674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.250688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.250868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.251047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.251075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.251281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.251550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.251580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.251841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.252121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.252149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.252428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.252687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.252701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.252937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.253157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.253170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.253415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.253591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.253622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 
00:24:22.153 [2024-05-15 03:18:53.253892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.254170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.254197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.254422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.254678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.254708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.254913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.255188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.255217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.255473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.255699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.255713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.255878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.256118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.256147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.256353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.256569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.256599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.256825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.257054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.257083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 
00:24:22.153 [2024-05-15 03:18:53.257339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.257537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.257567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.153 [2024-05-15 03:18:53.257846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.258126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.153 [2024-05-15 03:18:53.258154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.153 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.258416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.258586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.258600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.258851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.259385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.259761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.259998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.260245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.260447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.260486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 
00:24:22.154 [2024-05-15 03:18:53.260766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.261027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.261056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.261340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.261536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.261567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.261828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.262035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.262065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.262347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.262546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.262560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.262804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.263030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.263058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.263325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.263601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.263630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.263917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.264189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.264202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 
00:24:22.154 [2024-05-15 03:18:53.264429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.264649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.264663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.264890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.265185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.265214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.265487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.265756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.265785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.266070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.266223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.266252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.266504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.266654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.266668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.266874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.267181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.267216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.267418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.267677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.267708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 
00:24:22.154 [2024-05-15 03:18:53.267967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.268188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.268217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.268479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.268683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.268711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.268926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.269228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.269257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.154 qpair failed and we were unable to recover it. 00:24:22.154 [2024-05-15 03:18:53.269512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.154 [2024-05-15 03:18:53.269741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.269770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.269967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.270154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.270183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.270383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.270597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.270611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.270784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.271079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.271121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 
00:24:22.155 [2024-05-15 03:18:53.271367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.271586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.271600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.271794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.272020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.272049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.272323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.272586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.272617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.272875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.273088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.273102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.273257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.273519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.273550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.273757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.274041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.274070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.274327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.274604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.274634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 
00:24:22.155 [2024-05-15 03:18:53.274865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.275013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.275042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.275366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.275604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.275618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.275816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.276058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.276071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.276272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.276498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.276528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.276795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.277066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.277095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.277385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.277662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.277693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.277970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.278235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.278263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 
00:24:22.155 [2024-05-15 03:18:53.278475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.278694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.278723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.279002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.279205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.279218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.279468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.279631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.279644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.279904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.280073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.280086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.280340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.280589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.280619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.280829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.281130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.281159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.281475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.281776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.281805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 
00:24:22.155 [2024-05-15 03:18:53.282112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.282410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.282439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.282764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.283074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.283103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.283408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.283681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.283711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.283856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.284055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.284084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.155 qpair failed and we were unable to recover it. 00:24:22.155 [2024-05-15 03:18:53.284319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.155 [2024-05-15 03:18:53.284573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.284603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.284887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.285165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.285194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.285456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.285708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.285722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 
00:24:22.156 [2024-05-15 03:18:53.285894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.286145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.286174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.286404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.286615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.286645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.286927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.287437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.287802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.287995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.288202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.288504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.288518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.288676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.288929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.288943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 
00:24:22.156 [2024-05-15 03:18:53.289194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.289423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.289452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.289680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.289938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.289967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.290271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.290573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.290603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.290858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.291122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.291150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.291421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.291621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.291651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.291954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.292095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.292124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 00:24:22.156 [2024-05-15 03:18:53.292431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.292645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.156 [2024-05-15 03:18:53.292675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.156 qpair failed and we were unable to recover it. 
00:24:22.156 [2024-05-15 03:18:53.292980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.156 [2024-05-15 03:18:53.293178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.156 [2024-05-15 03:18:53.293206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.156 qpair failed and we were unable to recover it.
[... the same four-line failure sequence repeats back-to-back, roughly 150 further times, from [2024-05-15 03:18:53.293483] through [2024-05-15 03:18:53.367543] (elapsed 00:24:22.156-00:24:22.433): every connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and tqpair=0x9f4c10 never recovers. Repetitions elided for readability. ...]
00:24:22.433 [2024-05-15 03:18:53.367822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.368097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.368110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.368323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.368614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.368629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.368833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.369011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.369024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.369261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.369545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.369575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.369796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.370060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.370090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.370372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.370637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.370668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.370902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.371133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.371147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 
00:24:22.433 [2024-05-15 03:18:53.371408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.371667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.371682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.371794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.372269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.372637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.372948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.373144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.373414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.373450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.433 [2024-05-15 03:18:53.373634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.373899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.433 [2024-05-15 03:18:53.373927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.433 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.374149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.374341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.374370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 
00:24:22.434 [2024-05-15 03:18:53.374586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.374846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.374875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.375034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.375295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.375324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.375621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.375870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.375884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.376136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.376369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.376383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.376584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.376845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.376875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.377138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.377417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.377446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.377621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.377939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.377987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 
00:24:22.434 [2024-05-15 03:18:53.378276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.378538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.378569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.378836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.379046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.379076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.379342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.379632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.379663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.379878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.380136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.380166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.380483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.380741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.380771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.381080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.381272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.381301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.381536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.381729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.381758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 
00:24:22.434 [2024-05-15 03:18:53.382052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.382317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.382346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.382646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.382808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.382822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.382927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.383204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.383220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.383489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.383679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.383693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.383895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.384295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.384697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.384919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 
00:24:22.434 [2024-05-15 03:18:53.385215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.385501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.385531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.385754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.386235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.386758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.386994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.387277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.387553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.387568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.387826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.387931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.387945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.434 [2024-05-15 03:18:53.388192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.388506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.388537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 
00:24:22.434 [2024-05-15 03:18:53.388769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.388951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.434 [2024-05-15 03:18:53.388979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.434 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.389284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.389563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.389594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.389870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.390093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.390122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.390338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.390620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.390651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.390934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.391147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.391175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.391463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.391635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.391649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.391832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.392068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.392098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 
00:24:22.435 [2024-05-15 03:18:53.392377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.392707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.392739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.392982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.393270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.393300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.393591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.393902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.393932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.394209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.394490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.394521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.394752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.394971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.395001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.395198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.395434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.395448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.395627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.395897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.395925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 
00:24:22.435 [2024-05-15 03:18:53.396226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.396521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.396552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.396755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.396952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.396981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.397272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.397560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.397591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.397816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.398101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.398131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.398334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.398609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.398624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.398802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.399013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.399027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.399288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.399597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.399628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 
00:24:22.435 [2024-05-15 03:18:53.399828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.400065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.400095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.400385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.400669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.400702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.400929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.401206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.401235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.401537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.401849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.401879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.402032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.402320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.402349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.402686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.402975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.403005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.403295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.403528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.403543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 
00:24:22.435 [2024-05-15 03:18:53.403709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.403914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.403943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.404239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.404555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.404571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.435 [2024-05-15 03:18:53.404838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.405131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.435 [2024-05-15 03:18:53.405160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.435 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.405374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.405590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.405621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.405895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.406021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.406035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.406301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.406569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.406584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.406872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 
00:24:22.436 [2024-05-15 03:18:53.407310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.407750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.407998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.408142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.408432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.408461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.408757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.409099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.409128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.409338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.409602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.409639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.409930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.410178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.410192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.410395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.410657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.410672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 
00:24:22.436 [2024-05-15 03:18:53.410859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.411094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.411123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.411339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.411632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.411663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.411938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.412153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.412182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.412452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.412744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.412759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.413014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.413195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.413209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.413478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.413740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.413754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.414010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.414259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.414273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 
00:24:22.436 [2024-05-15 03:18:53.414526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.414766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.414780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.414962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.415132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.415147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.415425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.415644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.415674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.415970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.416118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.416147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.416443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.416751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.416782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.416987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.417258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.417288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 00:24:22.436 [2024-05-15 03:18:53.417577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.417858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.436 [2024-05-15 03:18:53.417888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.436 qpair failed and we were unable to recover it. 
00:24:22.436 [2024-05-15 03:18:53.418103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.436 [2024-05-15 03:18:53.418395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.436 [2024-05-15 03:18:53.418424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.436 qpair failed and we were unable to recover it.
[the same four-line sequence repeats, with only the timestamps advancing, roughly 150 more times between 03:18:53.418641 and 03:18:53.489532: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x9f4c10 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it."]
00:24:22.442 [2024-05-15 03:18:53.489532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.442 [2024-05-15 03:18:53.489662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.442 [2024-05-15 03:18:53.489677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.442 qpair failed and we were unable to recover it.
00:24:22.442 [2024-05-15 03:18:53.489867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.489977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.489991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.490119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.490395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.490611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.490740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.490875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.491235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.491559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.491679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 
00:24:22.442 [2024-05-15 03:18:53.491780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.492192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.492551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.492767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.492960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.493144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.493325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.493338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.493535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.493711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.493725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.493828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 
00:24:22.442 [2024-05-15 03:18:53.494196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.494565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.494674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.494863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.495255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.495685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.495791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.495936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.496051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.496065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.442 qpair failed and we were unable to recover it. 00:24:22.442 [2024-05-15 03:18:53.496167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.496331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.442 [2024-05-15 03:18:53.496345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 
00:24:22.443 [2024-05-15 03:18:53.496462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.496572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.496587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.496688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.496786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.496800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.496918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.497151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.497361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.497669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.497850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.497938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 
00:24:22.443 [2024-05-15 03:18:53.498290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.498661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.498771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.498898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.499323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.499725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.499957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.500176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.500373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.500401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.500620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.500754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.500782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 
00:24:22.443 [2024-05-15 03:18:53.501071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.501382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.501605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.501714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.501822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.502251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.502535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.502784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.502949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 
00:24:22.443 [2024-05-15 03:18:53.503372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.503718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.503900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.504013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.504298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.504605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.504891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.505143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.505348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.505377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.505575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.505800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.505829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 
00:24:22.443 [2024-05-15 03:18:53.506131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.506366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.506395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.443 qpair failed and we were unable to recover it. 00:24:22.443 [2024-05-15 03:18:53.506628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.506758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.443 [2024-05-15 03:18:53.506772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.507013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.507234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.507263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.507483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.507692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.507721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.507941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.508121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.508135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.508393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.508631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.508662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.508904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.509184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.509213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 
00:24:22.444 [2024-05-15 03:18:53.509538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.509833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.509863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.510055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.510337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.510366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.510661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.510817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.510856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.511111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.511237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.511251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.511425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.511588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.511630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.511850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.512155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.512184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.512381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.512574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.512604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 
00:24:22.444 [2024-05-15 03:18:53.512892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.513054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.513092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.513380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.513666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.513702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.514029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.514234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.514264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.514483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.514718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.514747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.514943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.515179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.515208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.515492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.515721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.515750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.515950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.516253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.516283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 
00:24:22.444 [2024-05-15 03:18:53.516552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.516848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.516877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.517195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.517456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.517509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.517724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.518042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.518072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.518267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.518554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.518585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.518876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.519152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.519166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.519372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.519638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.519669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.519886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.520096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.520125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 
00:24:22.444 [2024-05-15 03:18:53.520340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.520568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.520598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.520732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.520987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.521001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.521179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.521422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.521452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.444 qpair failed and we were unable to recover it. 00:24:22.444 [2024-05-15 03:18:53.521775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.521987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.444 [2024-05-15 03:18:53.522017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.522289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.522573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.522604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.522743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.523003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.523033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.523340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.523603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.523647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 
00:24:22.445 [2024-05-15 03:18:53.523907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.524169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.524198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.524483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.524748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.524777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.525063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.525326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.525355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.525581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.525777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.525807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.526117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.526410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.526439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.526668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.526970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.527000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.527234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.527508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.527539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 
00:24:22.445 [2024-05-15 03:18:53.527817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.528086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.528115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.528398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.528684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.528715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.528996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.529251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.529264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.529508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.529683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.529696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.529908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.530198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.530227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.530506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.530715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.530744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.530958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.531179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.531193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 
00:24:22.445 [2024-05-15 03:18:53.531449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.531744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.531776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.532062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.532346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.532375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.532612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.532825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.532854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.533130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.533324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.533353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.533644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.533873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.533887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.534151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.534259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.534272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.534436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.534632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.534646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 
00:24:22.445 [2024-05-15 03:18:53.534780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.535023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.535058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.535266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.535558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.535589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.535733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.536341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.536745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.536990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.445 qpair failed and we were unable to recover it. 00:24:22.445 [2024-05-15 03:18:53.537270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.445 [2024-05-15 03:18:53.537601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.537631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.537930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.538243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.538271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 
00:24:22.446 [2024-05-15 03:18:53.538577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.538845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.538874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.539142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.539415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.539445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.539786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.540021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.540049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.540284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.540489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.540530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.540816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.541055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.541084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.541355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.541629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.541660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.541858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.542049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.542077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 
00:24:22.446 [2024-05-15 03:18:53.542320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.542540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.542570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.542889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.543110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.543139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.543361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.543606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.543637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.543909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.544092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.544106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.544368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.544632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.544648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.544861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.544997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.545011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.545223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.545494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.545509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 
00:24:22.446 [2024-05-15 03:18:53.545700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.545968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.545998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.546201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.546418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.546447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.546730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.547014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.547043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.547242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.547454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.547493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.547787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.548053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.548082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.548308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.548600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.548631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.548963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.549180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.549208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 
00:24:22.446 [2024-05-15 03:18:53.549510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.549719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.549748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.550041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.550351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.550365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.446 qpair failed and we were unable to recover it. 00:24:22.446 [2024-05-15 03:18:53.550534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.550812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.446 [2024-05-15 03:18:53.550826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.551046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.551327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.551341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.551628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.551880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.551894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.552160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.552365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.552378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.552573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.552820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.552850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 
00:24:22.447 [2024-05-15 03:18:53.553170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.553481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.553510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.553782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.554049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.554078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.554346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.554558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.554589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.554887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.555102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.555131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.555420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.555591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.555606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.555775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.556030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.556059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.556389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.556686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.556718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 
00:24:22.447 [2024-05-15 03:18:53.556936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.557201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.557230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.557429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.557721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.557751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.557970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.558180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.558209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.558501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.558787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.558816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.558976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.559238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.559268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.559566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.559713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.559742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.560060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.560324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.560338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 
00:24:22.447 [2024-05-15 03:18:53.560508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.560743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.560757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.561024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.561319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.561348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.561573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.561808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.561843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.562163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.562359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.562389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.562711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.562997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.563027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.563298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.563527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.563558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.563794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.564105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.564119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 
00:24:22.447 [2024-05-15 03:18:53.564437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.564717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.564748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.565005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.565256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.565270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.565436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.565653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.565684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.565933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.566221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.566250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.447 qpair failed and we were unable to recover it. 00:24:22.447 [2024-05-15 03:18:53.566537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.566878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.447 [2024-05-15 03:18:53.566908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.567192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.567512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.567527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.567719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.567904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.567918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 
00:24:22.448 [2024-05-15 03:18:53.568149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.568413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.568442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.568659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.568898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.568928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.569222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.569482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.569513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.569800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.569997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.570026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.570230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.570436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.570475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.570644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.570843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.570872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.571171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.571486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.571516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 
00:24:22.448 [2024-05-15 03:18:53.571749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.571975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.571990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.572171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.572433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.572463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.572773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.572986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.573000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.573180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.573364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.573377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.573551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.573788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.573802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.573915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.574080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.574094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.574285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.574492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.574523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 
00:24:22.448 [2024-05-15 03:18:53.574728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.575261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.575802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.575995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.576205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.576440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.576454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.576716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.576989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.577018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.577239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.577525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.448 [2024-05-15 03:18:53.577557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.448 qpair failed and we were unable to recover it. 00:24:22.448 [2024-05-15 03:18:53.577844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.578183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.578213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 
00:24:22.719 [2024-05-15 03:18:53.578501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.578780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.578793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.578907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.579170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.579184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.579374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.579517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.579534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.579739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.580205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.580696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.580887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.581158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.581416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.581429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 
00:24:22.719 [2024-05-15 03:18:53.581672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.581933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.581947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.582131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.582373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.582403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.582684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.582834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.582863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.583061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.583324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.583353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.583676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.583897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.583926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.584226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.584541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.584576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.584795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.585109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.585138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 
00:24:22.719 [2024-05-15 03:18:53.585347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.585584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.585615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.585832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.586200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.586627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.586900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.587176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.587462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.587505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.587827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.588066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.588095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.588298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.588567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.588598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 
00:24:22.719 [2024-05-15 03:18:53.588801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.589009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.589038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.589237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.589486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.589516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.589750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.590015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.590044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.590263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.590446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.590485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.590761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.591043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.591072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.591303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.591592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.591623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 00:24:22.719 [2024-05-15 03:18:53.591826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.592044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.719 [2024-05-15 03:18:53.592076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.719 qpair failed and we were unable to recover it. 
00:24:22.719 [2024-05-15 03:18:53.592333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.592630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.592670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.592918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.593115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.593145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.593435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.593683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.593698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.593959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.594122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.594136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.594322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.594510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.594540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.594771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.595063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.595092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.595420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.595711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.595726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 
00:24:22.720 [2024-05-15 03:18:53.595982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.596267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.596296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.596564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.596831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.596862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.597147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.597437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.597474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.597747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.598195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.598628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.598946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.599240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.599485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.599516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 
00:24:22.720 [2024-05-15 03:18:53.599785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.600074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.600103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.600396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.600632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.600663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.600935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.601087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.601101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.601341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.601587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.601618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.601911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.602235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.602263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.602506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.602810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.602839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 00:24:22.720 [2024-05-15 03:18:53.603182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.603338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.720 [2024-05-15 03:18:53.603352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.720 qpair failed and we were unable to recover it. 
00:24:22.720 [2024-05-15 03:18:53.603554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.603853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.603884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.604045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.604304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.604318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.604504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.604619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.604633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.604823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.605057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.605071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.605332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.605537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.605552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.605764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.606029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.606058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.606348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.606697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.606728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.607007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.607236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.720 [2024-05-15 03:18:53.607265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.720 qpair failed and we were unable to recover it.
00:24:22.720 [2024-05-15 03:18:53.607551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.607769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.607799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.608000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.608263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.608292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.608508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.608722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.608756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.609030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.609323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.609353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.609629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.609899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.609929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.610255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.610555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.610586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.610906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.611197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.611211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.611394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.611669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.611684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.611811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.612322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.612718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.612912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.613176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.613492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.613523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.613743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.613982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.614017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.614290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.614497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.614528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.614745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.614983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.615012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.615306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.615504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.615519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.615754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.615922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.615951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.616246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.616512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.616543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.616830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.617117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.617147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.617393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.617590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.617621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.617841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.618155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.618193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.618418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.618687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.618718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.618986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.619228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.619257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.619558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.619834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.619864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.620111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.620321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.620350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.620597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.620795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.620824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.620964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.621090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.621119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.621408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.621640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.621670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.621943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.622235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.622249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.721 qpair failed and we were unable to recover it.
00:24:22.721 [2024-05-15 03:18:53.622434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.622648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.721 [2024-05-15 03:18:53.622679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.622972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.623244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.623273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.623493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.623718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.623748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.623970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.624199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.624228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.624509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.624743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.624757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.624994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.625195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.625209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.625325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.625504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.625520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.625790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.626172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.626754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.626941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.627154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.627411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.627440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.627668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.627830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.627859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.628164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.628363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.628392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.628635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.628926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.628966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.629212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.629452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.629471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.629638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.629898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.629927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.630203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.630429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.630457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.630617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.630841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.630871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.631078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.631342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.631371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.631668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.631957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.631986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.632319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.632518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.632549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.632870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.633138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.633167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.633455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.633786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.633818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.634110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.634342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.634372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.634584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.634859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.634895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.635197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.635513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.635545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.635845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.636056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.636085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.636295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.636503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.636535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.636757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.636984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.637013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.637256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.637572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.637603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.722 qpair failed and we were unable to recover it.
00:24:22.722 [2024-05-15 03:18:53.637805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.722 [2024-05-15 03:18:53.638114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.638143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.638364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.638662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.638693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.639013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.639305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.639335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.639569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.639781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.639811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.639944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.640154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.640168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.640432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.640613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.640629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.640840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.641137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.641166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.641448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.641744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.641774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.642000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.642267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.642295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.642587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.642878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.642907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.643196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.643456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.643500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.643770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.643988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.644017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.644302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.644589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.644621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.644839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.645112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.645142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.645419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.645702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.645732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.646026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.646239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.646268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.646556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.646838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.646868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.647207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.647418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.647447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.647740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.648018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.648047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.648324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.648616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.648648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.648970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.649213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.649243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.649491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.649760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.649790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.650089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.650401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.650431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.650739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.650928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.650942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.723 qpair failed and we were unable to recover it.
00:24:22.723 [2024-05-15 03:18:53.651180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.723 [2024-05-15 03:18:53.651458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.651497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.651722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.651876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.651905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.652186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.652451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.652471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.652731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.652948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.652962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.653196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.653386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.653415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.653599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.653887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.653917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.654237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.654418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.654433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.654700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.654924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.654954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.655155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.655446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.655488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.655711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.655913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.655943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.656151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.656366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.656396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.656670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.656890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.656920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.657207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.657485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.657500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.657690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.657895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.657924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.658299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.658529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.658545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.658808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.658990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.659005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.659207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.659393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.659423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.659651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.659923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.659953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.660250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.660563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.660594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.660867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.661332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.661746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.661960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.662246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.662446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.662460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.662631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.662872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.662901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.663197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.663488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.663519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.663763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.663979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.664009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.664279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.664567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.664599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.664821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.665054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.665083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.665376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.665665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.665681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.665872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.666086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.666100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.724 qpair failed and we were unable to recover it.
00:24:22.724 [2024-05-15 03:18:53.666372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.724 [2024-05-15 03:18:53.666661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.666693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.725 qpair failed and we were unable to recover it.
00:24:22.725 [2024-05-15 03:18:53.666914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.667233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.667263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.725 qpair failed and we were unable to recover it.
00:24:22.725 [2024-05-15 03:18:53.667423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.667658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.667673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.725 qpair failed and we were unable to recover it.
00:24:22.725 [2024-05-15 03:18:53.667896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.668079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.668093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.725 qpair failed and we were unable to recover it.
00:24:22.725 [2024-05-15 03:18:53.668375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.668585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.725 [2024-05-15 03:18:53.668602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:22.725 qpair failed and we were unable to recover it.
00:24:22.725 [2024-05-15 03:18:53.668776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.668949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.668981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.669167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.669378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.669410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.669709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.670001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.670034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.670274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.670588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.670606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.670862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.671085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.671117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.671395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.671675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.671693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.671952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.672112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.672146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 
00:24:22.725 [2024-05-15 03:18:53.672400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.672639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.672657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.672921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.673135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.673166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.673390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.673640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.673673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.673884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.674105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.674136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.674347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.674575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.674608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.674883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.675092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.675125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.675420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.675763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.675781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 
00:24:22.725 [2024-05-15 03:18:53.676061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.676210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.676242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.676480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.676697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.676729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.677002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.677196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.677212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.677323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.677569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.677605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.677824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.678040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.678072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.678286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.678501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.678535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 00:24:22.725 [2024-05-15 03:18:53.678833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.679049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.725 [2024-05-15 03:18:53.679081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.725 qpair failed and we were unable to recover it. 
00:24:22.725 [2024-05-15 03:18:53.679298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.679516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.679550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.679738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.679854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.679886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.680106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.680255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.680286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.680499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.680708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.680739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.680952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.681398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.681645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.681865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 
00:24:22.726 [2024-05-15 03:18:53.682088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.682376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.682392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.682484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.682673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.682690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.682893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.683376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.683739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.683989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.684285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.684563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.684581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.684821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.684952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.684969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 
00:24:22.726 [2024-05-15 03:18:53.685186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.685347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.685378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.685534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.685771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.685805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.686023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.686237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.686287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.686424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.686618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.686651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.686882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.687151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.687183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.687336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.687519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.687555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.687847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.688115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.688147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 
00:24:22.726 [2024-05-15 03:18:53.688364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.688688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.688723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.688884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.689178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.689211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.689420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.689687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.689706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.689900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.690410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.726 qpair failed and we were unable to recover it. 00:24:22.726 [2024-05-15 03:18:53.690779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.726 [2024-05-15 03:18:53.690909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.691116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.691303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.691319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 
00:24:22.727 [2024-05-15 03:18:53.691421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.691627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.691660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.691799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.692091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.692136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.692327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.692531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.692563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.692776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.692986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.693017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.693309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.693546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.693563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.693745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.693871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.693903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.694128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.694336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.694380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 
00:24:22.727 [2024-05-15 03:18:53.694601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.694776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.694792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.694897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.695110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.695141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.695418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.695588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.695606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.695872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.696376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.696790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.696940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.697117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.697317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.697348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 
00:24:22.727 [2024-05-15 03:18:53.697496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.697712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.697745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.697946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.698317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.698766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.698996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.699200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.699482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.699521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.699672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.699868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.699903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.700136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.700408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.700439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 
00:24:22.727 [2024-05-15 03:18:53.700650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.700804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.700846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.700992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.701452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.701766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.701980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.702273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.702488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.702521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.702790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.703025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.703056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 00:24:22.727 [2024-05-15 03:18:53.703347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.703577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.703610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.727 qpair failed and we were unable to recover it. 
00:24:22.727 [2024-05-15 03:18:53.703823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.727 [2024-05-15 03:18:53.704003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.704034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.704196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.704401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.704438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.704675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.704933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.704965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.705109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.705326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.705373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.705593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.705763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.705780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.705966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.706371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 
00:24:22.728 [2024-05-15 03:18:53.706737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.706912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.707122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.707332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.707363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.707574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.707743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.707778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.707909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.708221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.708596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.708814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.708947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.709146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.709177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 
00:24:22.728 [2024-05-15 03:18:53.709402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.709666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.709683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.709923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.710191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.710222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.710494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.710621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.710638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.710894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.711058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.711074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.728 qpair failed and we were unable to recover it. 00:24:22.728 [2024-05-15 03:18:53.711236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.728 [2024-05-15 03:18:53.711366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.711396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.711599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.711807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.711839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.712145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.712340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.712371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 
00:24:22.729 [2024-05-15 03:18:53.712607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.712861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.713090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.713335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.713366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.713635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.713820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.713851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.714071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.714359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.714401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.714584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.714764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.714794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.715015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.715314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.715345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.715581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.715784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.715815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 
00:24:22.729 [2024-05-15 03:18:53.716021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.716308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.716340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.716609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.716892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.716924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.717200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.717407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.717437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.717645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.717818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.717848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.718063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.718260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.718291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.718462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.718788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.718819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.719092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.719366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.719397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 
00:24:22.729 [2024-05-15 03:18:53.719632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.719794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.719825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.720121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.720265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.720296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.720561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.720774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.720804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.720951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.721181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.721211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.721503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.721734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.721761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.721967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.722355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 
00:24:22.729 [2024-05-15 03:18:53.722787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.722980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.723233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.723375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.723388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.723632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.723884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.723912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.724245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.724374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.724404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.724665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.724798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.724813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.729 [2024-05-15 03:18:53.725123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.725409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.729 [2024-05-15 03:18:53.725425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.729 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.725575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.725853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.725885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 
00:24:22.730 [2024-05-15 03:18:53.726128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.726398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.726430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.726689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.726834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.726851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.727101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.727272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.727288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.727485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.727761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.727782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.727907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.728154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.728632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.728834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 
00:24:22.730 [2024-05-15 03:18:53.729145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.729460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.729517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.729802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.730259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.730620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.730761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.730861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.731121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.731137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.731404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.731727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.731760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.732014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.732249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.732279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 
00:24:22.730 [2024-05-15 03:18:53.732548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.732786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.732802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.732994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.733095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.733112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.733358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.733640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.733682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.733933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.734141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.734173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.734442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.734649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.734682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.734817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.735102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.735143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.730 [2024-05-15 03:18:53.735340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.735532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.735564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 
00:24:22.730 [2024-05-15 03:18:53.735768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.736027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.730 [2024-05-15 03:18:53.736058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.730 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.736332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.736543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.736576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.736821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.737143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.737553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.737829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.738092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.738297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.738313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.738554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.738668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.738684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 
00:24:22.731 [2024-05-15 03:18:53.738888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.739207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.739622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.739853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.740089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.740326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.740358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.740530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.740778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.740809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.741096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.741410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.741426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.741668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.741964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.741996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 
00:24:22.731 [2024-05-15 03:18:53.742250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.742499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.742515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.742650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.742905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.742921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.743128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.743329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.743361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.743588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.743852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.743869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.744123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.744348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.744379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.744615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.744769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.744785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.744969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.745244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.745275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 
00:24:22.731 [2024-05-15 03:18:53.745441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.745736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.745753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.745923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.746182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.746214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.746413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.746705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.746745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.746975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.747293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.747324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.747613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.747763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.747779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.748044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.748281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.748312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.748588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.748861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.748892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 
00:24:22.731 [2024-05-15 03:18:53.749098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.749316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.749332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.749542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.749743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.749775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.731 qpair failed and we were unable to recover it. 00:24:22.731 [2024-05-15 03:18:53.749991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.731 [2024-05-15 03:18:53.750202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.750234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.750532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.750696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.750727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.751047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.751318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.751358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.751601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.751731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.751750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.751940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.752162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.752194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 
00:24:22.732 [2024-05-15 03:18:53.752518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.752736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.752767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.753042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.753494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.753533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.753746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.753869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.753886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.754006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.754294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.754310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.754602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.754801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.754832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.755087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.755220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.755252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.755476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.755760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.755791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 
00:24:22.732 [2024-05-15 03:18:53.756006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.756151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.756184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.756489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.756647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.756682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.756930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.757100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.757116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.757222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.757423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.757454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.757686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.758347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.758763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.758998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 
00:24:22.732 [2024-05-15 03:18:53.759204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.759494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.759512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.759678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.759918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.759949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.760181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.760313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.760344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.760635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.760844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.760876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.761100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.761298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.761330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.761543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.761836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.732 [2024-05-15 03:18:53.761867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.732 qpair failed and we were unable to recover it. 00:24:22.732 [2024-05-15 03:18:53.762035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.762323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.762354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 
00:24:22.733 [2024-05-15 03:18:53.762628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.762831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.762847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.763036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.763335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.763366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.763641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.763815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.763845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.764167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.764478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.764511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.764740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.765040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.765071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.765233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.765452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.765510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.765785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.765997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.766027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 
00:24:22.733 [2024-05-15 03:18:53.766262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.766478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.766496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.766772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.767066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.767099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.767387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.767650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.767667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.767854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.768049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.768081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.768307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.768556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.768574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.768831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.769022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.769039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.769303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.769542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.769560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 
00:24:22.733 [2024-05-15 03:18:53.769803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.770010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.770026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.770309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.770591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.770623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.770877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.771094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.771125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.771340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.771513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.771546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.771859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.772079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.772117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.772410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.772646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.772679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.733 [2024-05-15 03:18:53.772976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.773149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.773166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 
00:24:22.733 [2024-05-15 03:18:53.773415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.773691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.733 [2024-05-15 03:18:53.773723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.733 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.773951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.774266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.774297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.774597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.774802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.774834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.775058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.775546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.775873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.775989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.776093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.776356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.776386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 
00:24:22.734 [2024-05-15 03:18:53.776614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.776878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.776909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.777055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.777280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.777312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.777522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.777709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.777726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.777871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.778135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.778167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.778384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.778660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.778676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.778890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.779119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.779151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.779369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.779596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.779630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 
00:24:22.734 [2024-05-15 03:18:53.779851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.780130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.780162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.780370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.780652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.780686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.780906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.781122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.781154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.781446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.781687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.781720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.782002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.782310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.782341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.782641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.782800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.782831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 00:24:22.734 [2024-05-15 03:18:53.783130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.783335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.734 [2024-05-15 03:18:53.783365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.734 qpair failed and we were unable to recover it. 
00:24:22.739 [2024-05-15 03:18:53.857787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.858041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.858073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.858304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.858458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.858514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.858821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.859113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.859144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.859422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.859774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.859806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.860080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.860288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.860319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.860540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.860696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.860727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.860970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.861287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.861319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 
00:24:22.739 [2024-05-15 03:18:53.861623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.861824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.861840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.862029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.862293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.862324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.862622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.862936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.862967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.863273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.863430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.863473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.739 qpair failed and we were unable to recover it. 00:24:22.739 [2024-05-15 03:18:53.863714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.863912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.739 [2024-05-15 03:18:53.863928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.864106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.864374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.864405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.864629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.864791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.864823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 
00:24:22.740 [2024-05-15 03:18:53.864983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.865155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.865171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.865407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.865581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.865614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.865837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.866053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.866084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.866379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.866589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.866623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.866813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.867050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.867067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.867366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.867571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.867604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:22.740 [2024-05-15 03:18:53.867925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.868131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.868164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 
00:24:22.740 [2024-05-15 03:18:53.868413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.868731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.740 [2024-05-15 03:18:53.868747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:22.740 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.868858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.869025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.869043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.869228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.869497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.869514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.869780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.870290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.870736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.870917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.871100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.871387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.871403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 
00:24:23.011 [2024-05-15 03:18:53.871687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.871951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.871984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.872194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.872395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.872426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.872729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.872967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.872987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.873152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.873337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.873368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.873570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.873871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.873901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.011 qpair failed and we were unable to recover it. 00:24:23.011 [2024-05-15 03:18:53.874185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.011 [2024-05-15 03:18:53.874400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.874432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.874610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.874902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.874932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 
00:24:23.012 [2024-05-15 03:18:53.875123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.875332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.875364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.875596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.875908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.875940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.876212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.876493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.876526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.876826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.877004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.877045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.877302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.877511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.877528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.877820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.878361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 
00:24:23.012 [2024-05-15 03:18:53.878698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.878956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.879136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.879379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.879395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.879636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.879829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.879845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.880037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.880246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.880278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.880515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.880734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.880765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.880927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.881141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.881172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.881415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.881725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.881770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 
00:24:23.012 [2024-05-15 03:18:53.882010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.882299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.882343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.882682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.882961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.882993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.883222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.883443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.883484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.883782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.884251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.884712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.884982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.885288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.885580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.885613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 
00:24:23.012 [2024-05-15 03:18:53.885834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.886100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.886117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.886354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.886622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.886656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.886890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.887105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.887136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.887451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.887766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.887798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.888093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.888330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.888362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.888583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.888834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.888867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 00:24:23.012 [2024-05-15 03:18:53.889071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.889340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.012 [2024-05-15 03:18:53.889371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.012 qpair failed and we were unable to recover it. 
00:24:23.012 [2024-05-15 03:18:53.889603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.889881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.889912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.890171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.890399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.890415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.890593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.890818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.890834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.891075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.891294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.891325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.891554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.891773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.891805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.892063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.892319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.892354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.892571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.892794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.892826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 
00:24:23.013 [2024-05-15 03:18:53.893116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.893378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.893410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.893645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.893936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.893973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.894244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.894491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.894525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.894745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.895007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.895038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.895255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.895463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.895514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.895842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.896142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.896159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.896375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.896622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.896654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 
00:24:23.013 [2024-05-15 03:18:53.896961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.897159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.897190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.897463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.897736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.897753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.897983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.898266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.898283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.898540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.898729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.898745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.899008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.899181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.899201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.899489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.899705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.899722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.899900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.900102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.900133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 
00:24:23.013 [2024-05-15 03:18:53.900424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.900752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.900784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.901079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.901365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.901396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.901636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.901850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.901882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.902171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.902408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.902425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.902692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.902956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.902972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.903201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.903392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.903408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.903580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.903774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.903791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 
00:24:23.013 [2024-05-15 03:18:53.903994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.904122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.904139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.013 [2024-05-15 03:18:53.904415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.904711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.013 [2024-05-15 03:18:53.904759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.013 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.905057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.905213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.905245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.905455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.905758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.905789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.906092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.906279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.906295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.906483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.906672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.906703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.906999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.907285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.907316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 
00:24:23.014 [2024-05-15 03:18:53.907636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.907857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.907888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.908112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.908407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.908439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.908659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.908981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.909012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.909307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.909574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.909617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.909917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.910048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.910064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.910332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.910636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.910669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 00:24:23.014 [2024-05-15 03:18:53.910918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.911103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.014 [2024-05-15 03:18:53.911119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.014 qpair failed and we were unable to recover it. 
00:24:23.014 [2024-05-15 03:18:53.911398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.014 [2024-05-15 03:18:53.911691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.014 [2024-05-15 03:18:53.911725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.014 qpair failed and we were unable to recover it.
[... the same failure sequence repeats continuously from 03:18:53.911 through 03:18:53.986: two posix_sock_create connect() errors (errno = 111) followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x9f4c10 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." Only the timestamps advance; the tqpair pointer, target address, port, and error code are unchanged throughout. ...]
00:24:23.019 [2024-05-15 03:18:53.986834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.988734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.988771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.988929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.989311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.989796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.989966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.990102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.990253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.990285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.990426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.990601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.990633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.990932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.991128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.991167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 
00:24:23.019 [2024-05-15 03:18:53.991381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.991527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.019 [2024-05-15 03:18:53.991559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.019 qpair failed and we were unable to recover it. 00:24:23.019 [2024-05-15 03:18:53.991807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.992019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.992036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.992157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.992429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.992463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.992751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.992991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.993022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.993149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.993305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.993335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.993492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.993694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.993725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.993865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 
00:24:23.020 [2024-05-15 03:18:53.994255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.994653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.994821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.995025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.995380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.995412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.995657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.995795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.995827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.996054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.996319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.996335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.996579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.996878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.996909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.997063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.997213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.997243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 
00:24:23.020 [2024-05-15 03:18:53.997453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.997680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.997710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.997848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.997991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.998022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.998251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.998539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.998557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.998728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.999201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.999610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:53.999729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:53.999866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.000140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.000173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 
00:24:23.020 [2024-05-15 03:18:54.000377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.000585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.000618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:54.000914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.001211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.001227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:54.001401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.001703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.001720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:54.001889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.002131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.002162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:54.002441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.002656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.002688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.020 qpair failed and we were unable to recover it. 00:24:23.020 [2024-05-15 03:18:54.002930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.003203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.020 [2024-05-15 03:18:54.003219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.003402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.003588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.003605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 
00:24:23.021 [2024-05-15 03:18:54.003714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.003931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.003946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.004235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.004540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.004573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.004771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.004988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.005020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.005182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.005410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.005441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.005772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.005933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.005964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.006234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.006381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.006398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.006681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.006812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.006844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 
00:24:23.021 [2024-05-15 03:18:54.007199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.007404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.007436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.007715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.007983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.008014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.008302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.008527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.008560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.008795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.009214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.009742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.009890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.010165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.010415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.010446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 
00:24:23.021 [2024-05-15 03:18:54.010665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.010875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.010906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.011140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.011381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.011412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.011671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.011915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.011946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.012161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.012428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.012459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.012764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.013057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.013090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.013253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.013545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.013579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.013790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.014059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.014091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 
00:24:23.021 [2024-05-15 03:18:54.014385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.014620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.014654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.014896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.015110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.015148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.015461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.015696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.015728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.015950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.016227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.016257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.016415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.016573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.016606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.016895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.017046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.017077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 00:24:23.021 [2024-05-15 03:18:54.017410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.017580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.017613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.021 qpair failed and we were unable to recover it. 
00:24:23.021 [2024-05-15 03:18:54.017906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.018219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.021 [2024-05-15 03:18:54.018250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.018553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.018765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.018797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.018957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.019258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.019291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.019570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.019793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.019825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.019985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.020221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.020252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.020478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.020685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.020718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.021013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.021170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.021202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 
00:24:23.022 [2024-05-15 03:18:54.021522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.021737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.021770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.021974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.022134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.022167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.022428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.022627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.022645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.022885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.023024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.023056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.023285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.023546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.023578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.023857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.024182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.024214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.024347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.024636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.024669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 
00:24:23.022 [2024-05-15 03:18:54.025017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.025244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.025276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.025486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.025729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.025760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.025995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.026266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.026283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.026519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.026701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.026718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.026906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.027067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.027100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.027395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.027605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.027638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.027862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.028163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.028194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 
00:24:23.022 [2024-05-15 03:18:54.028400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.028686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.028719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.028933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.029069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.029102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.029263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.029503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.029536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.029758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.029996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.030028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.030338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.030655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.030690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.030914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.031224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.031254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.031527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.031695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.031727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 
00:24:23.022 [2024-05-15 03:18:54.031874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.032092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.022 [2024-05-15 03:18:54.032123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.022 qpair failed and we were unable to recover it. 00:24:23.022 [2024-05-15 03:18:54.032341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.032548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.032582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.032736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.032887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.032919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.033232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.033461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.033524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.033696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.033904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.033953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.034200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.034367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.034398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.034642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.034866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.034898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 
00:24:23.023 [2024-05-15 03:18:54.035180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.035383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.035420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.035664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.035942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.035974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.036214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.036389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.036422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.036632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.036775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.036806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.037040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.037333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.037363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.037577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.037740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.037771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 00:24:23.023 [2024-05-15 03:18:54.037945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.038235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.023 [2024-05-15 03:18:54.038268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.023 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x9f4c10 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats with advancing timestamps from 03:18:54.038567 through 03:18:54.104020 ...]
00:24:23.028 [2024-05-15 03:18:54.104169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.104435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.104496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.104657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.104809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.104841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.105067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.105374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.105405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.105710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.105912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.105942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.106161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.106389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.106433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.106738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.106918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.106936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.107190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.107380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.107399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 
00:24:23.028 [2024-05-15 03:18:54.107648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.107803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.107835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.108073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.108384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.108416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.108633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.108791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.108807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.109075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.109233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.109265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.109511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.109806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.109837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.110006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.110211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.110228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.110430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.110672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.110689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 
00:24:23.028 [2024-05-15 03:18:54.110946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.111210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.111226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.111457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.111700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.111717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.111972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.112304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.112337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.112676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.112935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.112967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.113192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.113491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.113523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.028 [2024-05-15 03:18:54.113750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.113987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.028 [2024-05-15 03:18:54.114019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.028 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.114181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.114501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.114533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 
00:24:23.029 [2024-05-15 03:18:54.114687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.114840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.114871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.115149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.115419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.115451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.115627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.115824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.115855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.116082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.116301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.116333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.116540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.116734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.116751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.116982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.117225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.117257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.117508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.117724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.117742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 
00:24:23.029 [2024-05-15 03:18:54.117866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.118187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.118219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.118523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.118770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.118802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.119050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.119252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.119284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.119437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.119652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.119684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.119905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.120115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.120146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.120366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.120633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.120666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.120895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.121100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.121132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 
00:24:23.029 [2024-05-15 03:18:54.121336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.121554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.121587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.121734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.121975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.122006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.122254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.122421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.122474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.122617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.122786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.122803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.122960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.123134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.123150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.123400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.123591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.123623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.123871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.124162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.124194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 
00:24:23.029 [2024-05-15 03:18:54.124433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.124731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.124764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.124973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.125258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.125289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.125513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.125740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.125772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.126026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.126191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.126223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.126498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.126638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.029 [2024-05-15 03:18:54.126655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.029 qpair failed and we were unable to recover it. 00:24:23.029 [2024-05-15 03:18:54.126837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.126948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.126968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.127218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.127328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.127344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 
00:24:23.030 [2024-05-15 03:18:54.127521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.127760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.127778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.127965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.128386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.128799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.128986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.129241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.129532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.129566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.129813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.130002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.130034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.130349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.130573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.130591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 
00:24:23.030 [2024-05-15 03:18:54.130759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.131026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.131057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.131365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.131572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.131605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.131788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.132008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.132039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.132284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.132623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.132657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.132879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.133024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.133055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.133345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.133597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.133630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.133791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.134025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.134060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 
00:24:23.030 [2024-05-15 03:18:54.134287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.134521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.134553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.134717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.134989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.135021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.135237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.135375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.135407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.135696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.135887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.135918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.136271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.136631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.136663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.136943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.137368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 
00:24:23.030 [2024-05-15 03:18:54.137740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.137929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.138108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.138231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.138262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.138537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.138762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.138793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.139004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.139139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.139169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.139497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.139746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.139763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.139954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.140095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.140127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.140364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.140585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.140630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 
00:24:23.030 [2024-05-15 03:18:54.140859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.141238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.030 qpair failed and we were unable to recover it. 00:24:23.030 [2024-05-15 03:18:54.141730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.030 [2024-05-15 03:18:54.141995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.142309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.142487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.142521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.142724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.142951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.142982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.143277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.143486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.143520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.143688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.143907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.143938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 
00:24:23.031 [2024-05-15 03:18:54.144100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.144302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.144346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.144632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.144782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.144813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.145021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.145229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.145260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.145509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.145743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.145775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.145944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.146168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.146199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.146502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.146624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.146642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.146900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.147032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.147049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 
00:24:23.031 [2024-05-15 03:18:54.147255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.147491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.147523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.147774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.148073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.148105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.148400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.148612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.148645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.148825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.149050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.149081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.149383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.149616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.149649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.149877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.150134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.150166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.150440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.150746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.150779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 
00:24:23.031 [2024-05-15 03:18:54.151011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.151251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.151288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.151613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.151749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.151766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.152013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.152233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.152263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.152575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.152698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.152715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.152847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.153121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.153153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.153379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.153555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.153589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 00:24:23.031 [2024-05-15 03:18:54.153865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.154032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.031 [2024-05-15 03:18:54.154064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.031 qpair failed and we were unable to recover it. 
00:24:23.031 [2024-05-15 03:18:54.154271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.031 [2024-05-15 03:18:54.154422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.031 [2024-05-15 03:18:54.154453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.031 qpair failed and we were unable to recover it.
[... this same four-line record (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x9f4c10 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously in the log from 03:18:54.154 through 03:18:54.230 (console timestamps 00:24:23.031 to 00:24:23.310), with only the timestamps changing ...]
00:24:23.310 [2024-05-15 03:18:54.231180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.231333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.231364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.310 qpair failed and we were unable to recover it. 00:24:23.310 [2024-05-15 03:18:54.231497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.231688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.231720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.310 qpair failed and we were unable to recover it. 00:24:23.310 [2024-05-15 03:18:54.231879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.232044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.232075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.310 qpair failed and we were unable to recover it. 00:24:23.310 [2024-05-15 03:18:54.232295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.232457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.232501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.310 qpair failed and we were unable to recover it. 00:24:23.310 [2024-05-15 03:18:54.232702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.232992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.310 [2024-05-15 03:18:54.233023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.310 qpair failed and we were unable to recover it. 00:24:23.310 [2024-05-15 03:18:54.233176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.234312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.234347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.234577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.234772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.234789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 
00:24:23.311 [2024-05-15 03:18:54.235766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.235922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.235942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.236071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.236323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.236355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.236526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.236678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.236709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.236856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.237130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.237161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.237369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.237590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.237607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.237793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.238300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 
00:24:23.311 [2024-05-15 03:18:54.238749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.238906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.239009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.239117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.239134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.239263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.239577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.239609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.239774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.240259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.240516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.240682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.240820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 
00:24:23.311 [2024-05-15 03:18:54.241271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.241676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.241839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.241977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.242250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.242287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.242517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.242720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.242751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.242907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.243225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.243544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 
00:24:23.311 [2024-05-15 03:18:54.243794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.243910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.311 qpair failed and we were unable to recover it. 00:24:23.311 [2024-05-15 03:18:54.244076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.311 [2024-05-15 03:18:54.244311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.244328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.244454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.244646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.244663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.244779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.244929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.244946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.245258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.245488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.245521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.245684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.245854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.245873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.246011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.246119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.246136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 
00:24:23.312 [2024-05-15 03:18:54.246311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.246404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.246420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.247623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.247910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.247930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.248148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.248335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.248352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.248480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.248736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.248773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.248936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.249305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.249789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.249937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 
00:24:23.312 [2024-05-15 03:18:54.250045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.250374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.250849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.250997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.251239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.251453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.251503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.251736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.251907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.251951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.253137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.253157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.253330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.253521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.253554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 
00:24:23.312 [2024-05-15 03:18:54.253829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.254219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.254610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.254795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.255096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.255306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.255322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.312 [2024-05-15 03:18:54.255541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.255669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.312 [2024-05-15 03:18:54.255701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.312 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.256003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.256215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.256247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.256406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.256673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.256706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 
00:24:23.313 [2024-05-15 03:18:54.256927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.257371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.257807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.257994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.258132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.258259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.258290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.258444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.258786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.258821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.259033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.259253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.259286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.259419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.259631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.259665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 
00:24:23.313 [2024-05-15 03:18:54.259910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.260288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.260773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.260933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.261133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.261352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.261384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.261541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.261690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.261723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.261870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.262396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 
00:24:23.313 [2024-05-15 03:18:54.262695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.262834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.262981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.263344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.263804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.263967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.264131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.264402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.264437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.264675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.264825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.264842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.265033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 
00:24:23.313 [2024-05-15 03:18:54.265404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.265779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.265884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.313 [2024-05-15 03:18:54.266057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.266164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.313 [2024-05-15 03:18:54.266180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.313 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.266360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.266586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.266622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.266749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.267184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.267672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.267891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 
00:24:23.314 [2024-05-15 03:18:54.268035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.268247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.268278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.268418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.268565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.268598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.270287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.270597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.270636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.270796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.270955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.270987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.271106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.271241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.271272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.271422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.271601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.271633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.271913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 
00:24:23.314 [2024-05-15 03:18:54.272209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.272669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.272895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.273053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.273385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.273819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.273976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.274177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.274541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.274581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 00:24:23.314 [2024-05-15 03:18:54.274737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.275432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.314 [2024-05-15 03:18:54.275461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.314 qpair failed and we were unable to recover it. 
00:24:23.314 [2024-05-15 03:18:54.275722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.314 [2024-05-15 03:18:54.275832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.314 [2024-05-15 03:18:54.275848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.314 qpair failed and we were unable to recover it.
[... the same four-line cycle — two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock error for tqpair=0x9f4c10 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 03:18:54.276018 through 03:18:54.330678 (elapsed 00:24:23.314 to 00:24:23.321) ...]
00:24:23.321 [2024-05-15 03:18:54.330807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.330992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.331165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.331463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.331772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.331939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.332147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.332371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.332607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.332732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 
00:24:23.321 [2024-05-15 03:18:54.332906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.333148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.333424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.333637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.333830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.333998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.334412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.334733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.334930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 
00:24:23.321 [2024-05-15 03:18:54.335039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.335278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.335690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.335915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.336067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.336383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.336722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.336939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.337129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.337328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.337359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 
00:24:23.321 [2024-05-15 03:18:54.337561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.337756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.337787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.337933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.338223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.338725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.338907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.339106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.339224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.321 [2024-05-15 03:18:54.339253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.321 qpair failed and we were unable to recover it. 00:24:23.321 [2024-05-15 03:18:54.339380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.339510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.339542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.339754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.340535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.340561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 
00:24:23.322 [2024-05-15 03:18:54.340805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.340932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.340963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.341179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.341386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.341416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.341649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.341799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.341829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.342032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.342165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.342195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.342329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.342590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.342621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.342905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.343127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 
00:24:23.322 [2024-05-15 03:18:54.343372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.343673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.343912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.344337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.344686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.344886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.344998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.345227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.345493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 
00:24:23.322 [2024-05-15 03:18:54.345708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.345849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.345996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.346130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.346160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.346352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.346607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.346640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.346782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.347173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.347490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.347800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.347978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 
00:24:23.322 [2024-05-15 03:18:54.348237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.348370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.348402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.322 qpair failed and we were unable to recover it. 00:24:23.322 [2024-05-15 03:18:54.348559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.322 [2024-05-15 03:18:54.348692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.348723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.348917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.349215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.349516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.349824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.349982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.350160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.350367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.350398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 
00:24:23.323 [2024-05-15 03:18:54.350537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.351450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.351487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.351673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.351887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.351906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.352639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.352852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.352886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.353141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.353352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.353383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.353627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.353752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.353783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.353976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.354269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 
00:24:23.323 [2024-05-15 03:18:54.354716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.354869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.355008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.355314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.355725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.355891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.356014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.356299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.356626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.356853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 
00:24:23.323 [2024-05-15 03:18:54.356979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.357065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.357080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.357164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.358268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.358293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.358411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.358588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.358604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.323 [2024-05-15 03:18:54.358780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.358976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.323 [2024-05-15 03:18:54.359006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.323 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.359143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.359292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.359322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.359453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.359590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.359631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.359828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 
00:24:23.324 [2024-05-15 03:18:54.360276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.360655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.360889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.361078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.361334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.361365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.361545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.361765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.361797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.361928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.362283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.362651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.362815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 
00:24:23.324 [2024-05-15 03:18:54.362968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.363331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.363706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.363928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.364075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.364332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.364710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.364962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.365103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 
00:24:23.324 [2024-05-15 03:18:54.365390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.365751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.365935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.366035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.366250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.366461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.366771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.366886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 00:24:23.324 [2024-05-15 03:18:54.367040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.367241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.324 [2024-05-15 03:18:54.367271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.324 qpair failed and we were unable to recover it. 
00:24:23.324 [2024-05-15 03:18:54.367421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.324 [2024-05-15 03:18:54.367569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.324 [2024-05-15 03:18:54.367602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.324 qpair failed and we were unable to recover it.
00:24:23.326 [2024-05-15 03:18:54.379165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.326 [2024-05-15 03:18:54.379323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.326 [2024-05-15 03:18:54.379338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.326 qpair failed and we were unable to recover it.
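Note on the repeated failure above: errno 111 on Linux is ECONNREFUSED, meaning each TCP connect() to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) was actively refused, typically because no listener was accepting on that port at the moment SPDK's posix sock layer dialed it. A minimal standalone C check, not SPDK code, confirms the mapping:

/* Not part of the test output: confirm what errno 111 means on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    /* The log's "connect() failed, errno = 111" decodes as: */
    printf("errno 111 -> %s (ECONNREFUSED == %d)\n", strerror(111), ECONNREFUSED);
    return 0;
}

Compiled and run on Linux, this prints "errno 111 -> Connection refused (ECONNREFUSED == 111)".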
00:24:23.326 Read completed with error (sct=0, sc=8)
00:24:23.326 starting I/O failed
00:24:23.326 Write completed with error (sct=0, sc=8)
00:24:23.326 starting I/O failed
00:24:23.326 Read completed with error (sct=0, sc=8)
00:24:23.326 starting I/O failed
00:24:23.326 [2024-05-15 03:18:54.379653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:23.326 [2024-05-15 03:18:54.379806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa02770 is same with the state(5) to be set
00:24:23.326 [2024-05-15 03:18:54.379971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.326 [2024-05-15 03:18:54.380075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.326 [2024-05-15 03:18:54.380090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.326 qpair failed and we were unable to recover it.
00:24:23.327 [2024-05-15 03:18:54.383682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.383812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.383830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420
00:24:23.327 qpair failed and we were unable to recover it.
00:24:23.327 [2024-05-15 03:18:54.384012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.384180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.384196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.327 qpair failed and we were unable to recover it.
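The (sct=0, sc=8) pairs above are NVMe completion status fields: sct is the Status Code Type and sc the Status Code. Per the NVMe base specification, sct 0 is Generic Command Status, and within that type sc 0x08 is "Command Aborted due to SQ Deletion", which is consistent with in-flight reads and writes being failed back while their queue pair is torn down. The "CQ transport error -6 (No such device or address)" that follows is -ENXIO, reported by spdk_nvme_qpair_process_completions once the transport connection is gone. A small decoder for just the values seen here (the string tables are taken from the spec, not from SPDK, and cover only the observed codes):

/* Decode the (sct, sc) pair seen in the log. Only the observed values are
 * mapped; a full decoder would cover the complete NVMe status tables. */
#include <stdio.h>

static const char *sct_name(unsigned sct)
{
    return sct == 0 ? "Generic Command Status" : "unmapped status code type";
}

static const char *sc_name(unsigned sct, unsigned sc)
{
    if (sct == 0 && sc == 0x08)
        return "Command Aborted due to SQ Deletion";
    return "unmapped status code";
}

int main(void)
{
    unsigned sct = 0, sc = 8;   /* values from the completions above */
    printf("sct=%u (%s), sc=0x%02x (%s)\n", sct, sct_name(sct), sc, sc_name(sct, sc));
    return 0;
}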
00:24:23.327 [2024-05-15 03:18:54.384360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.384533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.327 [2024-05-15 03:18:54.384548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.327 qpair failed and we were unable to recover it.
00:24:23.330 [2024-05-15 03:18:54.417050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.330 [2024-05-15 03:18:54.417300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.330 [2024-05-15 03:18:54.417315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.330 qpair failed and we were unable to recover it.
00:24:23.330 [2024-05-15 03:18:54.417431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.417557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.417572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.417796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.417916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.417946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.418149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.418368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.418398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.418605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.418736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.418766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.418978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.419254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.419284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.419404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.419601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.419633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 00:24:23.330 [2024-05-15 03:18:54.419847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.420100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.330 [2024-05-15 03:18:54.420130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.330 qpair failed and we were unable to recover it. 
00:24:23.331 [2024-05-15 03:18:54.420391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.420527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.420558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.420786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.421053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.421083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.421299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.421573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.421604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.421871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.422341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.422774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.422986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.423187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.423385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.423415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 
00:24:23.331 [2024-05-15 03:18:54.423548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.423737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.423767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.423873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.424197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.424687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.424827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.425081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.425227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.425257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.425461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.425726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.425756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.425890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 
00:24:23.331 [2024-05-15 03:18:54.426358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.426731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.426966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.427175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.427374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.427405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.427597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.427741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.427771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.427906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.428249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.428713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.428931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 
00:24:23.331 [2024-05-15 03:18:54.429224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.429431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.429461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.429667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.429873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.429903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.430095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.430267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.430297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.430501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.430694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.430724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.430940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.431158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.431173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.431363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.431491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.431522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 00:24:23.331 [2024-05-15 03:18:54.431723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.431974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.432004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.331 qpair failed and we were unable to recover it. 
00:24:23.331 [2024-05-15 03:18:54.432194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.331 [2024-05-15 03:18:54.432459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.432512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.432723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.432951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.432982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.433117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.433328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.433359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.433486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.433608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.433639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.433846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.434259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.434730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.434901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 
00:24:23.332 [2024-05-15 03:18:54.435091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.435194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.435207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.435325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.435517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.435549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.435781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.436178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.436633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.436877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.437107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.437295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.437309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.437485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.437690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.437722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 
00:24:23.332 [2024-05-15 03:18:54.437861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.438160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.438567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.438828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.439029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.439417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.439730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.439903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.440178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.440442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.440481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 
00:24:23.332 [2024-05-15 03:18:54.440736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.440872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.440902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.441116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.441236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.441267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.441482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.441781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.441811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.442015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.442423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.442734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.442873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 00:24:23.332 [2024-05-15 03:18:54.443082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.443300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.332 [2024-05-15 03:18:54.443330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.332 qpair failed and we were unable to recover it. 
00:24:23.332 [2024-05-15 03:18:54.443533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.443635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.443651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.443836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.444240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.444810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.444984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.445101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.445318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.445333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.445515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.445704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.445734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.445887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 
00:24:23.333 [2024-05-15 03:18:54.446258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.446711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.446900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.447075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.447354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.447384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.447526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.447727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.447757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.448034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.448288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.448318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.448566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.448669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.448684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.448850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 
00:24:23.333 [2024-05-15 03:18:54.449432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.449787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.449951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.450168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.450374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.450404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.450628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.450797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.450812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.450984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.451229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.451259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.451455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.451607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.451643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.333 [2024-05-15 03:18:54.451857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.452136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.452150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 
00:24:23.333 [2024-05-15 03:18:54.452332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.452453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.333 [2024-05-15 03:18:54.452477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.333 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.452641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.452752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.452767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.453040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.453384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.453796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.453971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.454073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.454288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 
00:24:23.609 [2024-05-15 03:18:54.454700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.454939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.455109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.455451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.455743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.455950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.456093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.456296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.456325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.456541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.456697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.456727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.457007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.457288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.457303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 
00:24:23.609 [2024-05-15 03:18:54.457401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.457558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.457574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.457808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.457987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.458001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.458177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.458293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.458328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.458587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.458743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.458773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.458975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.459105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.459135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.459343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.459534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.459565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.459765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.459970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.460000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 
00:24:23.609 [2024-05-15 03:18:54.460141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.460417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.460447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.460595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.460849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.460879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.461014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.461165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.609 [2024-05-15 03:18:54.461195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.609 qpair failed and we were unable to recover it. 00:24:23.609 [2024-05-15 03:18:54.461429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.461708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.461739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.462021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.462221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.462250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.462476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.462695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.462725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.462916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 
00:24:23.610 [2024-05-15 03:18:54.463349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.463702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.463921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.464134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.464418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.464433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.464537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.464648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.464662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.464831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.465140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.465504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.465696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 
00:24:23.610 [2024-05-15 03:18:54.465885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.466261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.466564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.466779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.466979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.467113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.467142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.467401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.467636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.467668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.467824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.468252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 
00:24:23.610 [2024-05-15 03:18:54.468700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.468932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.469126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.469488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.469769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.469942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.470115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.470369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.470400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.470539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.470682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.470712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.470845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.470978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.471007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 
00:24:23.610 [2024-05-15 03:18:54.471264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.471481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.471522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.471727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.471917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.471947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.472236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.472416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.472431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.472545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.472769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.610 [2024-05-15 03:18:54.472784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.610 qpair failed and we were unable to recover it. 00:24:23.610 [2024-05-15 03:18:54.472895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.472999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.473014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.473176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.473329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.473359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.473649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.473906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.473936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 
00:24:23.611 [2024-05-15 03:18:54.474139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.474406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.474435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.474589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.474807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.474836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.474989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.475272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.475303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.475510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.475658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.475689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.475885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.476024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.476053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.476272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.476480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.476511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.476793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.476973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.477003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 
00:24:23.611 [2024-05-15 03:18:54.477279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.477453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.477503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.477785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.478299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.478765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.478980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.479180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.479374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.479404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.479657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.479819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.479849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.480054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 
00:24:23.611 [2024-05-15 03:18:54.480470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.480810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.480936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.481081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.481285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.481314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.481519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.481712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.481742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.481873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.482373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.482737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.482921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 
00:24:23.611 [2024-05-15 03:18:54.483129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.483485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.483772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.483936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.484199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.484449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.484472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.484649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.484828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.484860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.484998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.485195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.611 [2024-05-15 03:18:54.485225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.611 qpair failed and we were unable to recover it. 00:24:23.611 [2024-05-15 03:18:54.485427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.485658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.485690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 
00:24:23.612 [2024-05-15 03:18:54.485832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.486347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.486683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.486821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.487034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.487378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.487694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.487936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.488060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.488281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.488312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 
00:24:23.612 [2024-05-15 03:18:54.488591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.488796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.488825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.488966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.489228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.489474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.489858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.489995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.490025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.490222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.490412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.490442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.490692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.490893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.490922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 
00:24:23.612 [2024-05-15 03:18:54.491065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.491275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.491304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.491476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.491607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.491636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.491841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.492235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.492577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.492723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.493008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.493315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 
00:24:23.612 [2024-05-15 03:18:54.493721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.493968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.494121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.494459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.494724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.494911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.495120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.495379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 00:24:23.612 [2024-05-15 03:18:54.495762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.495955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.612 qpair failed and we were unable to recover it. 
00:24:23.612 [2024-05-15 03:18:54.496247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.612 [2024-05-15 03:18:54.496386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.496416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.496644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.496764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.496778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.496890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.497258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.497618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.497797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.498026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.498254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 
00:24:23.613 [2024-05-15 03:18:54.498621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.498914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.499106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.499399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.499741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.499930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.500126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.500316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.500345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.500476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.500714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.500729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.500950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 
00:24:23.613 [2024-05-15 03:18:54.501208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.501590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.501791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.502000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.502146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.502176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.502386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.502653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.502684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.502853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.503335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.503744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.503895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 
00:24:23.613 [2024-05-15 03:18:54.504041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.504325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.504355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.613 qpair failed and we were unable to recover it. 00:24:23.613 [2024-05-15 03:18:54.504613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.613 [2024-05-15 03:18:54.504726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.504755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.504902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.505179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.505565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.505797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.505989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.506388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 
00:24:23.614 [2024-05-15 03:18:54.506809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.506986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.507129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.507293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.507308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.507413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.507601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.507632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.507806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.508250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.508703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.508868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 00:24:23.614 [2024-05-15 03:18:54.509014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.509198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.614 [2024-05-15 03:18:54.509227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.614 qpair failed and we were unable to recover it. 
00:24:23.619 [2024-05-15 03:18:54.567489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.567684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.567713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.567993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.568181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.568210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.568435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.568659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.568675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.568846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.569189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.569536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.569703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.569892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 
00:24:23.619 [2024-05-15 03:18:54.570361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.570706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.570868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.571080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.571284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.571314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.571448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.571645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.571676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.571880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.572281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.572700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.572815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 
00:24:23.619 [2024-05-15 03:18:54.572966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.573365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.573719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.573844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.574115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.574257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.574299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.619 qpair failed and we were unable to recover it. 00:24:23.619 [2024-05-15 03:18:54.574473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.619 [2024-05-15 03:18:54.574632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.574662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.574870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.575291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 
00:24:23.620 [2024-05-15 03:18:54.575706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.575814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.575929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.576347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.576669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.576843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.576961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.577310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.577747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.577977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 
00:24:23.620 [2024-05-15 03:18:54.578255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.578508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.578539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.578668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.578916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.578945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.579227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.579520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.579551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.579831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.580187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.580553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.580825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.580954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 
00:24:23.620 [2024-05-15 03:18:54.581275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.581728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.581865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.581971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.582182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.582489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.582617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.582857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.583115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 
00:24:23.620 [2024-05-15 03:18:54.583386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.583630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.583747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.583866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.584017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.620 [2024-05-15 03:18:54.584031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.620 qpair failed and we were unable to recover it. 00:24:23.620 [2024-05-15 03:18:54.584205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.584292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.584307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.584474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.584698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.584712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.584896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.585264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 
00:24:23.621 [2024-05-15 03:18:54.585671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.585855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.586020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.586391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.586771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.586928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.587042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.587277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.587291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.587461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.587657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.587671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.587932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 
00:24:23.621 [2024-05-15 03:18:54.588148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.588433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.588815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.588924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.589013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.589288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.589726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.589959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.590165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.590293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.590323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 
00:24:23.621 [2024-05-15 03:18:54.590457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.590624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.590654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.590865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.591277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.591641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.591831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.591988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.592184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.592213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.592452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.592603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.592633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.592914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 
00:24:23.621 [2024-05-15 03:18:54.593262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.593707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.593857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.594062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.594275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.594306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.621 qpair failed and we were unable to recover it. 00:24:23.621 [2024-05-15 03:18:54.594457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.621 [2024-05-15 03:18:54.594602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.594631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.594851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.595237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.595650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.595930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 
00:24:23.622 [2024-05-15 03:18:54.596156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.596298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.596327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.596600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.596784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.596814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.597073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.597284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.597314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.597519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.597643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.597674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.597868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.598472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.598812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.598916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 
00:24:23.622 [2024-05-15 03:18:54.599018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.599194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.599224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.599362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.599531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.599562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.599822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.600176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.600502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.600788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.601022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.601224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.601253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.601456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.601651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.601680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 
00:24:23.622 [2024-05-15 03:18:54.601872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.602143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.602172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.602306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.602592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.602621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.602836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.602979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.603009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.603267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.603537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.603568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.603782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.603906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.603935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.604154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.604376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.604405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-05-15 03:18:54.604636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.604827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.604855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.622 qpair failed and we were unable to recover it. 
00:24:23.622 [2024-05-15 03:18:54.604990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.622 [2024-05-15 03:18:54.605212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.605241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.605448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.605659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.605690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.605971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.606321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.606566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.606743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.606866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.607008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.607038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 00:24:23.623 [2024-05-15 03:18:54.607228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.607373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.623 [2024-05-15 03:18:54.607403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.623 qpair failed and we were unable to recover it. 
00:24:23.623 [2024-05-15 03:18:54.607568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.623 [2024-05-15 03:18:54.607858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.623 [2024-05-15 03:18:54.607888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:23.623 qpair failed and we were unable to recover it.
00:24:23.623 [2024-05-15 03:18:54.608101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.623 [2024-05-15 03:18:54.608299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.623 [2024-05-15 03:18:54.608329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:23.623 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure cycle repeats verbatim from 03:18:54.608 through 03:18:54.670; every attempt against tqpair=0x7f2004000b90, addr=10.0.0.2, port=4420 fails with errno = 111 ...]
00:24:23.628 [2024-05-15 03:18:54.670496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.628 [2024-05-15 03:18:54.670639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.628 [2024-05-15 03:18:54.670667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:23.628 qpair failed and we were unable to recover it.
00:24:23.628 [2024-05-15 03:18:54.670809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.671387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.671693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.671917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.672057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.672329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.672359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.672565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.672702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.672739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.672916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.673212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 
00:24:23.628 [2024-05-15 03:18:54.673502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.628 [2024-05-15 03:18:54.673828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.628 [2024-05-15 03:18:54.673991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.628 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.674124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.674319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.674348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.674608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.674860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.674898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.675059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.675445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.675774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.675927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 
00:24:23.629 [2024-05-15 03:18:54.676091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.676249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.676263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.676510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.676714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.676744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.676932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.677359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.677728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.677973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.678140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.678304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.678319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.678438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.678636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.678651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 
00:24:23.629 [2024-05-15 03:18:54.678830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.679202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.679585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.679839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.680037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.680313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.680343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.680534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.680672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.680701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.680883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.681196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 
00:24:23.629 [2024-05-15 03:18:54.681564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.681813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.681995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.682098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.682112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.629 qpair failed and we were unable to recover it. 00:24:23.629 [2024-05-15 03:18:54.682287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.629 [2024-05-15 03:18:54.682446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.682483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.682687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.682806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.682835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.683022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.683247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.683276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.683406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.683600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.683631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.683933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 
00:24:23.630 [2024-05-15 03:18:54.684375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.684754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.684954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.685185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.685385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.685415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.685654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.685795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.685824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.686079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.686438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.686781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.686987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 
00:24:23.630 [2024-05-15 03:18:54.687187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.687314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.687343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.687541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.687731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.687745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.687973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.688275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.688729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.688876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.689030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.689386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 
00:24:23.630 [2024-05-15 03:18:54.689799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.630 [2024-05-15 03:18:54.689935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.630 qpair failed and we were unable to recover it. 00:24:23.630 [2024-05-15 03:18:54.690121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.690302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.690332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.690523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.690733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.690763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.690966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.691255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.691803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.691922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.692038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 
00:24:23.631 [2024-05-15 03:18:54.692328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.692632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.692851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.693053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.693254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.693283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.693485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.693687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.693717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.693919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.694123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.694153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.694431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.694641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.694671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.694882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 
00:24:23.631 [2024-05-15 03:18:54.695231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.695672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.695902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.696092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.696283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.696312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.696456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.696685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.696714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.696969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.697234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.697252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.697421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.697593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.697608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.631 qpair failed and we were unable to recover it. 00:24:23.631 [2024-05-15 03:18:54.697766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.631 [2024-05-15 03:18:54.697986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.698016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 
00:24:23.632 [2024-05-15 03:18:54.698275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.698540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.698571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.698795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.698994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.699023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.699175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.699331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.699346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.699462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.699722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.699752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.699951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.700155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.700185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.700390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.700578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.700608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.700808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 
00:24:23.632 [2024-05-15 03:18:54.701152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.701387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.701727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.701892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.702089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.702397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.702704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.702943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.703236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.703365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.703394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 
00:24:23.632 [2024-05-15 03:18:54.703533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.703727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.703756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.703953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.704160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.704190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.704390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.704554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.704585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.704792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.705243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.632 qpair failed and we were unable to recover it. 00:24:23.632 [2024-05-15 03:18:54.705675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.632 [2024-05-15 03:18:54.705796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.705961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 
00:24:23.633 [2024-05-15 03:18:54.706248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.706535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.706824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.706989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.707263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.707456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.707506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.707668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.707919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.707947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.708177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.708444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.708482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 00:24:23.633 [2024-05-15 03:18:54.708699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.708945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.633 [2024-05-15 03:18:54.708974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:23.633 qpair failed and we were unable to recover it. 
00:24:23.633 [2024-05-15 03:18:54.709147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.633 [2024-05-15 03:18:54.709364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.633 [2024-05-15 03:18:54.709393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:23.633 qpair failed and we were unable to recover it.
00:24:23.633 [2024-05-15 03:18:54.709553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.633 [2024-05-15 03:18:54.709828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.633 [2024-05-15 03:18:54.709857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:23.633 qpair failed and we were unable to recover it.
[... the same connect() failed / qpair failed cycle repeats for tqpair=0x7f2004000b90 through 03:18:54.741 ...]
00:24:23.638 [2024-05-15 03:18:54.741826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.638 [2024-05-15 03:18:54.742015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.638 [2024-05-15 03:18:54.742029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.638 qpair failed and we were unable to recover it.
[... the cycle continues identically for tqpair=0x7f1ffc000b90 through 03:18:54.762 ...]
00:24:23.920 [2024-05-15 03:18:54.762317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.920 [2024-05-15 03:18:54.762482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.920 [2024-05-15 03:18:54.762497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.920 qpair failed and we were unable to recover it.
00:24:23.920 [2024-05-15 03:18:54.762656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.762757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.762769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.762871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.762973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.762984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.763067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.763377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.763664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.763763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.763863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.764185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 
00:24:23.920 [2024-05-15 03:18:54.764550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.764773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.764923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.765425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.765718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.765878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.766020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.766202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.766231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.766424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.766598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.766628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.766821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 
00:24:23.920 [2024-05-15 03:18:54.767291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.767669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.767909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.768063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.768412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.768726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.768876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.769003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.769187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.769217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.769425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.769686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.769717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 
00:24:23.920 [2024-05-15 03:18:54.769909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.770183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.770475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.770657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.770858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.771394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.771813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.771975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 00:24:23.920 [2024-05-15 03:18:54.772177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.772345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.920 [2024-05-15 03:18:54.772375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.920 qpair failed and we were unable to recover it. 
00:24:23.921 [2024-05-15 03:18:54.772636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.772743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.772753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.772837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.772997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.773185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.773454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.773767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.773861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.774010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.774173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.774184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.774418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.774695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.774726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 
00:24:23.921 [2024-05-15 03:18:54.774879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.775151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.775162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.775323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.775551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.775583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.775846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.776089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.776118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.776374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.776624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.776656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.776783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.777191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.777629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.777862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 
00:24:23.921 [2024-05-15 03:18:54.778049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.778169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.778179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.778390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.778603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.778615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.778895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.779260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.779685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.779910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.780136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.780252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.780283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.780508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.780681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.780710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 
00:24:23.921 [2024-05-15 03:18:54.780900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.781053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.781082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.781338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.781639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.781670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.781942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.782207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.782237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.782520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.782774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.782803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.782946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.783202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.783232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.783441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.783676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.783707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.783842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.784015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.784027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 
00:24:23.921 [2024-05-15 03:18:54.784134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.784250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.784261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.921 qpair failed and we were unable to recover it. 00:24:23.921 [2024-05-15 03:18:54.784383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.921 [2024-05-15 03:18:54.784632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.784663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.784856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.785231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.785733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.785944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.786137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.786397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.786426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.786738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.786940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.786970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 
00:24:23.922 [2024-05-15 03:18:54.787194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.787417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.787446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.787638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.787791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.787820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.788078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.788375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.788748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.788963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.789154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.789362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.789392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.789593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.789723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.789753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 
00:24:23.922 [2024-05-15 03:18:54.789939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.790400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.790821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.790985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.791121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.791267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.791296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.791552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.791805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.791835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.791981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.792345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 
00:24:23.922 [2024-05-15 03:18:54.792656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.792851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.793015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.793295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.793476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.793741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.793905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.794009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.794285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.794300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.794471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.794589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.794627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 
00:24:23.922 [2024-05-15 03:18:54.794774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.794989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.795020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.795129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.795328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.795339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.922 [2024-05-15 03:18:54.795487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.795652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.922 [2024-05-15 03:18:54.795682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.922 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.795888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.796327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.796795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.796932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.797226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.797486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.797517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 
00:24:23.923 [2024-05-15 03:18:54.797662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.797849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.797879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.798085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.798239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.798270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.798507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.798614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.798650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.798795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.798984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.799014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.799202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.799339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.799370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.799542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.799821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.799851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 00:24:23.923 [2024-05-15 03:18:54.800060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.800287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.923 [2024-05-15 03:18:54.800317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.923 qpair failed and we were unable to recover it. 
00:24:23.923 [2024-05-15 03:18:54.800522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.923 [2024-05-15 03:18:54.800725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.923 [2024-05-15 03:18:54.800754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.923 qpair failed and we were unable to recover it.
00:24:23.923 [... the same three-message sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1ffc000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats ~150 more times between 03:18:54.800951 and 03:18:54.863387; only the timestamps change ...]
00:24:23.928 [2024-05-15 03:18:54.863482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.928 [2024-05-15 03:18:54.863627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.928 [2024-05-15 03:18:54.863639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.928 qpair failed and we were unable to recover it.
00:24:23.928 [2024-05-15 03:18:54.863766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.928 [2024-05-15 03:18:54.864019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.928 [2024-05-15 03:18:54.864049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.928 qpair failed and we were unable to recover it. 00:24:23.928 [2024-05-15 03:18:54.864300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.928 [2024-05-15 03:18:54.864504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.928 [2024-05-15 03:18:54.864535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.928 qpair failed and we were unable to recover it. 00:24:23.928 [2024-05-15 03:18:54.864660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.864804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.864833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.865029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.865407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.865765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.865996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.866278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.866426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.866456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 
00:24:23.929 [2024-05-15 03:18:54.866674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.866861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.866891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.867119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.867506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.867819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.867976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.868185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.868370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.868399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.868626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.868885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.869180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.869457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.869494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 
00:24:23.929 [2024-05-15 03:18:54.869702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.869887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.869917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.870069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.870338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.870694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.870911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.871105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.871336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.871365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.871588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.871867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.871896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.872035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.872149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.872178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 
00:24:23.929 [2024-05-15 03:18:54.872433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.872661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.872692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.872900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.873094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.873104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.873263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.873515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.873545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.873770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.874266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.874727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.874864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.875066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 
00:24:23.929 [2024-05-15 03:18:54.875422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.875769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.875959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.876190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.876452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.876490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.929 [2024-05-15 03:18:54.876613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.876741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.929 [2024-05-15 03:18:54.876771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.929 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.876963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.877305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.877659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.877847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 
00:24:23.930 [2024-05-15 03:18:54.878060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.878253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.878510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.878770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.878919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.879176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.879384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.879414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.879702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.879845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.879874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.880012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.880264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.880293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 
00:24:23.930 [2024-05-15 03:18:54.880490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.880728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.880757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.880983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.881366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.881718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.881943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.882131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.882321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.882355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.882566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.882713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.882742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.882943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 
00:24:23.930 [2024-05-15 03:18:54.883298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.883671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.883885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.884197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.884320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.884349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.884556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.884699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.884728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.884861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.885286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.885754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.885898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 
00:24:23.930 [2024-05-15 03:18:54.886109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.886410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.886737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.886985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.887111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.887371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.930 qpair failed and we were unable to recover it. 00:24:23.930 [2024-05-15 03:18:54.887577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.930 [2024-05-15 03:18:54.887745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.888029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 
00:24:23.931 [2024-05-15 03:18:54.888368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.888645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.888799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.888947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.889240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.889704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.889870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.890061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.890420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 
00:24:23.931 [2024-05-15 03:18:54.890823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.890976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.891112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.891205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.891216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.891451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.891644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.891674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.891880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.892260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.892598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.892886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.893151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.893362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.893373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 
00:24:23.931 [2024-05-15 03:18:54.893493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.893681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.893711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.893913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.894401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.894649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.894754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.894859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.895150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.895430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 
00:24:23.931 [2024-05-15 03:18:54.895647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.895919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.896137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.896258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.896288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.896397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.896550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.896562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.896732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.896977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.897007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.897204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.897319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.897330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.897420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.897625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.897636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 00:24:23.931 [2024-05-15 03:18:54.897889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.898022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.898051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.931 qpair failed and we were unable to recover it. 
00:24:23.931 [2024-05-15 03:18:54.898260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.931 [2024-05-15 03:18:54.898451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.898487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.898611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.898743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.898772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.899050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.899260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.899290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.899510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.899669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.899699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.899887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.900351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 00:24:23.932 [2024-05-15 03:18:54.900557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.932 [2024-05-15 03:18:54.900666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.932 qpair failed and we were unable to recover it. 
00:24:23.932 [2024-05-15 03:18:54.900749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.932 [2024-05-15 03:18:54.900836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.932 [2024-05-15 03:18:54.900848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:23.932 qpair failed and we were unable to recover it.
[The same sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats verbatim with only the timestamps advancing, through 2024-05-15 03:18:54.959967.]
00:24:23.937 [2024-05-15 03:18:54.960113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.960232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.960260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.960386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.960574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.960585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.960833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.961192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.961574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.961793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.962000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.962380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 
00:24:23.937 [2024-05-15 03:18:54.962774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.962928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.963118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.963315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.937 [2024-05-15 03:18:54.963344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.937 qpair failed and we were unable to recover it. 00:24:23.937 [2024-05-15 03:18:54.963489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.963692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.963721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.963933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.964160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.964189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.964373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.964542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.964572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.964832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.965169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 
00:24:23.938 [2024-05-15 03:18:54.965534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.965736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.965957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.966156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.966185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.966432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.966606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.966638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.966894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.967238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.967699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.967906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.968099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 
00:24:23.938 [2024-05-15 03:18:54.968407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.968773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.968880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.969113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.969452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.969825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.969973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.970214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.970448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.970474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.970589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.970719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.970750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 
00:24:23.938 [2024-05-15 03:18:54.970886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.971082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.971112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.971399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.971542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.971557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.971807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.971978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.972161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.972545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.972837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.972950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.973125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.973239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.973268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 
00:24:23.938 [2024-05-15 03:18:54.973490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.973675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.973705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.973948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.974276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.974313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.974542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.974752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.974784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.938 [2024-05-15 03:18:54.974932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.975131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.938 [2024-05-15 03:18:54.975161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.938 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.975355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.975560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.975577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.975768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.975969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.976145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 
00:24:23.939 [2024-05-15 03:18:54.976474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.976792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.976912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.977137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.977238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.977253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.977413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.977592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.977607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.977791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.978233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.978581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.978797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 
00:24:23.939 [2024-05-15 03:18:54.979020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.979381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.979800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.979976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.980180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.980459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.980500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.980793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.980929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.980959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.981153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.981276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.981306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.981590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.981782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.981812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 
00:24:23.939 [2024-05-15 03:18:54.982039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.982187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.982223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.982415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.982568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.982600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.982859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.983081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.983112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.983246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.983448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.983489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.983723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.983972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.984002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.984154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.984360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.984389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.984579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.984860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.984891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 
00:24:23.939 [2024-05-15 03:18:54.985097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.985311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.985326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.985503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.985670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.985700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.985853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.986362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.986597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.986714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.939 qpair failed and we were unable to recover it. 00:24:23.939 [2024-05-15 03:18:54.986938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.939 [2024-05-15 03:18:54.987163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.987178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.987365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.987560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.987591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 
00:24:23.940 [2024-05-15 03:18:54.987745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.987978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.988008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.988132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.988270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.988301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.988490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.988604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.988635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.988834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.989198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.989629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.989910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.990074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.990246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.990275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 
00:24:23.940 [2024-05-15 03:18:54.990564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.990716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.990746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.990959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.991394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.991835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.991988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.992178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.992392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.992422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.992571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.992773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.992803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.993074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.993263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.993293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 
00:24:23.940 [2024-05-15 03:18:54.993503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.993645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.993684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.993922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.994209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.994683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.994902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.995167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.995372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.995402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.995696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.995807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.995837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.996114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.996254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.996283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 
00:24:23.940 [2024-05-15 03:18:54.996469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.996689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.996704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.996883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.996995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.997026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.997148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.997403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.997432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.997653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.997856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.997885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.998118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.998383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.998413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.998666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.998763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.998805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 00:24:23.940 [2024-05-15 03:18:54.999062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.999330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.999364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.940 qpair failed and we were unable to recover it. 
00:24:23.940 [2024-05-15 03:18:54.999595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.940 [2024-05-15 03:18:54.999799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:54.999829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:54.999961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.000333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.000780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.000985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.001248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.001354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.001384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.001581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.001684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.001699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.001875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 
00:24:23.941 [2024-05-15 03:18:55.002235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.002580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.002741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.002951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.003321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.003665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.003837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.004038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.004223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.004253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 00:24:23.941 [2024-05-15 03:18:55.004393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.004585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.941 [2024-05-15 03:18:55.004624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.941 qpair failed and we were unable to recover it. 
00:24:23.941 [2024-05-15 03:18:55.004788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.004956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.004986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.005139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.005304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.005334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.005488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.005736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.005752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.005897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.006360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.006673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.006887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.006985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.007083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.007375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.007832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.007995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.008122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.008252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.008282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.008524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.008656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.941 [2024-05-15 03:18:55.008686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.941 qpair failed and we were unable to recover it.
00:24:23.941 [2024-05-15 03:18:55.008945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.009260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.009647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.009932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.010213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.010402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.010417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.010543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.010765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.010780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.010943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.011350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.011810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.011944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.012089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.012246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.012275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.012500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.012702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.012732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.012935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.013315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.013827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.013966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.014169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.014291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.014320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.014583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.014728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.014759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.014958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.015354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.015761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.015881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.016108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.016329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.016344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.016474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.016629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.016644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.016800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.017370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.017687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.017992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.018141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.018340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.018370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.018540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.018786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.018804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.018896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.019122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.019151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.019360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.019493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.019523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.019745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.019970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.020001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.020133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.020251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.020281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.020492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.020678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.942 [2024-05-15 03:18:55.020709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.942 qpair failed and we were unable to recover it.
00:24:23.942 [2024-05-15 03:18:55.020844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.021151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.021509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.021858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.021977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.022007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.022201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.022410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.022440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.022656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.022799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.022830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.023035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.023232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.023262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.023472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.023584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.023617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.023877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.024065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.024095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.024321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.024528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.024559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.024775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.024983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.025013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.025144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.025344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.025373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.025630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.025804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.025834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.025977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.026173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.026203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.026407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.026686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.026717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.026924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.027345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.027786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.027934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.028157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.028528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.028797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.028928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.029038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.029201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.029216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.029384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.029633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.029664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.029960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.030268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.030620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.030814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.031023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.031408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.031646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.031865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.032065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.032375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.943 [2024-05-15 03:18:55.032406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.943 qpair failed and we were unable to recover it.
00:24:23.943 [2024-05-15 03:18:55.032650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.032777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.032807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.032969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.033245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.033639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.033954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.034223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.034341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.034372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.034577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.034755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.034790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.035010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.035144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.035174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.035388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.035573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.035610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.035835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.035991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.036006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.036169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.036296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.036326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.036605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.036811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.036826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.037100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.037298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.037327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.037524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.037699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.037729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.037930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.038309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.038744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.038991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.039215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.039437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.039476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.039615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.039839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.039870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.040125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.040351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.040381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.040649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.040879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.040909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.041049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.041178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.041209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.041492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.041659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.041690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.041838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.042216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.042646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.042955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.043214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.043414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.043443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.043653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.043835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.043850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.044077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.044331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.044360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.044570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.044675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.044690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.044875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.045009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.045039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.944 qpair failed and we were unable to recover it.
00:24:23.944 [2024-05-15 03:18:55.045248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.944 [2024-05-15 03:18:55.045386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.045415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.045652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.045852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.045882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.046065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.046228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.046257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.046457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.046770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.046801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.046937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.047189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.047219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.047423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.047714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.047730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.047871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.048268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.048670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.048851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.049012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.049378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.049646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.049825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.049950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.050444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.050734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.050998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.051128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.051326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.051357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.051611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.051785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.051817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.052009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.052205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.052235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.052421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.052617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.052648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.052771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.052973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.053003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.053293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.053513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.053545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.053729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.053909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.053940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.054207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.054394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.054424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.054566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.054800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.945 [2024-05-15 03:18:55.054830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:23.945 qpair failed and we were unable to recover it.
00:24:23.945 [2024-05-15 03:18:55.055088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.945 [2024-05-15 03:18:55.055293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.945 [2024-05-15 03:18:55.055323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.945 qpair failed and we were unable to recover it. 00:24:23.945 [2024-05-15 03:18:55.055484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.945 [2024-05-15 03:18:55.055608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.945 [2024-05-15 03:18:55.055639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.945 qpair failed and we were unable to recover it. 00:24:23.945 [2024-05-15 03:18:55.055775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.056172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.056410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.056761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.056874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.057095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.057321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.057351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 
00:24:23.946 [2024-05-15 03:18:55.057554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.057692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.057722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.057846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.058205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.058541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.058732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.058980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.059243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.059541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.059839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 
00:24:23.946 [2024-05-15 03:18:55.059986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.060115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.060144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.060276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.060504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.946 [2024-05-15 03:18:55.060534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:23.946 qpair failed and we were unable to recover it. 00:24:23.946 [2024-05-15 03:18:55.060822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.223 qpair failed and we were unable to recover it. 00:24:24.223 [2024-05-15 03:18:55.061241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.223 qpair failed and we were unable to recover it. 00:24:24.223 [2024-05-15 03:18:55.061576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.223 qpair failed and we were unable to recover it. 00:24:24.223 [2024-05-15 03:18:55.061847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.061965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.223 qpair failed and we were unable to recover it. 00:24:24.223 [2024-05-15 03:18:55.062187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.062359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.223 [2024-05-15 03:18:55.062374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.223 qpair failed and we were unable to recover it. 
00:24:24.224 [2024-05-15 03:18:55.062484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.062643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.062658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.062834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.062995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.063106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.063530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.063795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.063981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.064207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.064371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.064386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 00:24:24.224 [2024-05-15 03:18:55.064511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.064687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.224 [2024-05-15 03:18:55.064702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.224 qpair failed and we were unable to recover it. 
00:24:24.224 [2024-05-15 03:18:55.064882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.065269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.065662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.065860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.066136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.066416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.066445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.066678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.066866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.066896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.067065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.067387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.067426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.067742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.067971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.068003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.070953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.071274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.071570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.071797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.071958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.072283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.072521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.224 [2024-05-15 03:18:55.072791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.224 [2024-05-15 03:18:55.072987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.224 qpair failed and we were unable to recover it.
00:24:24.227 [2024-05-15 03:18:55.099984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.100260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.100560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.100688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.100946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.101086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.101117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.101310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.101571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.101602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.101866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.102015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.102044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.227 qpair failed and we were unable to recover it. 00:24:24.227 [2024-05-15 03:18:55.102236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-05-15 03:18:55.102416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.102446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 
00:24:24.228 [2024-05-15 03:18:55.102628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.102783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.102814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.103017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.103190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.103220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.103382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.103581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.103613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.103806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.104202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.104620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.104902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.105104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 
00:24:24.228 [2024-05-15 03:18:55.105507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.105794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.105975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.106126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.106295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.106324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.106543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.106764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.106794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.106979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.107087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.107110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.107295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.107449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.107491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.107704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.107999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.108028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 
00:24:24.228 [2024-05-15 03:18:55.108224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.108414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.108443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.108713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.108985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.109015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.109215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.109417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.109447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.109713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.109920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.109950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.110151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.110335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.110346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.110562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.110664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.110675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.110914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.110995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.111006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 
00:24:24.228 [2024-05-15 03:18:55.111119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.111333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.111344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.228 qpair failed and we were unable to recover it. 00:24:24.228 [2024-05-15 03:18:55.111423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.228 [2024-05-15 03:18:55.111586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.111597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.111867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.111981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.112011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.112277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.112423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.112453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.112726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.112926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.112956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.113081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.113284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.113314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.113504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.113757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.113786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 
00:24:24.229 [2024-05-15 03:18:55.113916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.114188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.114694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.114916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.115152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.115258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.115287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.115517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.115663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.115693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.115821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.115975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.116002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.116270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.116424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.116453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 
00:24:24.229 [2024-05-15 03:18:55.116612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.116749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.116779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.116930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.117291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.117750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.117937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.118074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.118261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.118290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.118423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.118654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.118684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.118907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 
00:24:24.229 [2024-05-15 03:18:55.119286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.119721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.119949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.120219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.120421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.120452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.120595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.120791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.120820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.121030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.121392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.121750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.121933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 
00:24:24.229 [2024-05-15 03:18:55.122128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.122377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.122407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.122606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.122794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.122823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.123105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.123220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-05-15 03:18:55.123251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.229 qpair failed and we were unable to recover it. 00:24:24.229 [2024-05-15 03:18:55.123440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.123672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.123703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.123885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.123976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.123987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.124090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.124247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.124258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.124374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.124655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.124686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 
00:24:24.230 [2024-05-15 03:18:55.124880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.125257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.125621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.125869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.126080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.126342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.126373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.126514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.126704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.126734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.126927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.127440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 
00:24:24.230 [2024-05-15 03:18:55.127722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.127902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.128040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.128222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.128252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.128456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.128719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.128749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.128946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.129487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.129763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.129936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.130171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.130354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.130364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 
00:24:24.230 [2024-05-15 03:18:55.130527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.130680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.130709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.130914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.131142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.131171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.131372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.131497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.131527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.131801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.132078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.132107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.132365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.132505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.132536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.132756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.133266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 
00:24:24.230 [2024-05-15 03:18:55.133583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.133741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.133844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.134036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.134065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.134450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.134490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.134790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.135048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.135077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.230 [2024-05-15 03:18:55.135283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.135426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-05-15 03:18:55.135455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.230 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.135600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.135765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.135794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.135934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 
00:24:24.231 [2024-05-15 03:18:55.136189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.136558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.136788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.137044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.137310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.137339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.137541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.137740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.137769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.138000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.138305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.138334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.138493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.138627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.138656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.138894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 
00:24:24.231 [2024-05-15 03:18:55.139168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.139527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.139754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.139952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.140332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.140702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.140935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.141031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.141130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.141140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 00:24:24.231 [2024-05-15 03:18:55.141285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.141398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.231 [2024-05-15 03:18:55.141409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.231 qpair failed and we were unable to recover it. 
00:24:24.231 [2024-05-15 03:18:55.141558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.231 [2024-05-15 03:18:55.141641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.231 [2024-05-15 03:18:55.141651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.231 qpair failed and we were unable to recover it.
00:24:24.231 [... the same retry sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f1ffc000b90 addr=10.0.0.2 port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 03:18:55.141749 through 03:18:55.188422 ...]
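What the loop above is reporting: errno 111 on Linux is ECONNREFUSED. Nothing is listening on 10.0.0.2:4420 (4420 is the IANA-registered NVMe/TCP port), so each TCP handshake is answered with a reset, posix_sock_create's connect() fails, and nvme_tcp_qpair_connect_sock abandons the qpair. A minimal standalone sketch of the failing call, assuming a Linux host with no listener on the port (loopback is used here so the refusal is immediate and does not depend on routing to 10.0.0.2):

    /* Minimal sketch: reproduce "connect() failed, errno = 111" in isolation.
     * Assumes nothing is listening on 127.0.0.1:4420. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* Prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

The initiator keeps reissuing the same connect, which is why the record repeats with only the microsecond timestamp changing until a target is listening again.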
00:24:24.235 [... connect() failed (errno = 111) / qpair failure retries continue ...]
00:24:24.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1172720 Killed "${NVMF_APP[@]}" "$@"
00:24:24.235 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:24:24.236 [... connect() failed (errno = 111) / qpair failure retries continue ...]
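The "Killed" notice explains the storm of refused connections: target_disconnect.sh has SIGKILLed the running nvmf target application (the command held in the NVMF_APP array), so its listening socket on 4420 is gone, and the test then calls disconnect_init to bring a fresh target up. The message itself is just bash reporting a job that died by signal. A minimal sketch of that mechanism, with a placeholder pause()-ing child standing in for the target process (the child is an assumption for illustration, not the real app):

    /* Minimal sketch: how a "<pid> Killed <command>" report arises.
     * The parent SIGKILLs the child and inspects the wait status. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {          /* child: stand-in for "${NVMF_APP[@]}" */
            pause();             /* sleep until a signal arrives */
            _exit(0);
        }

        kill(pid, SIGKILL);      /* what the disconnect test does to the target */

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            /* strsignal(SIGKILL) is "Killed" -- the word bash prints */
            printf("%d %s\n", (int)pid, strsignal(WTERMSIG(status)));
        return 0;
    }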
00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:24.236 [... connect() failed (errno = 111) / qpair failure retries continue ...]
00:24:24.236 [2024-05-15 03:18:55.196986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 [2024-05-15 03:18:55.197222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 [2024-05-15 03:18:55.197473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 [2024-05-15 03:18:55.197727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.197884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 [2024-05-15 03:18:55.198149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1173445 00:24:24.236 [2024-05-15 03:18:55.198485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 00:24:24.236 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1173445 00:24:24.236 [2024-05-15 03:18:55.198742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.236 [2024-05-15 03:18:55.198844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.236 qpair failed and we were unable to recover it. 
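Buried in the connection noise, the xtrace above shows nvmf/common.sh recording the freshly started target's PID (nvmfpid=1173445) and handing it to waitforlisten. A minimal sketch of that launch-and-wait pattern, reassembled from the trace lines (the exact launch command appears in the next log line; reading the flags as SPDK's usual app options is an assumption, not taken from this log):

  # launch the target inside the test's network namespace, then wait for it
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                 # 1173445 in this run
  waitforlisten "$nvmfpid"   # blocks until the RPC socket is ready
  # -m 0xF0: core mask (cores 4-7); -i 0: shared-memory id;
  # -e 0xFFFF: tracepoint group mask (assumed; verify against nvmf_tgt --help)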
00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:24.237 [2024-05-15 03:18:55.198995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1173445 ']' 00:24:24.237 [2024-05-15 03:18:55.199152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.199257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.237 [2024-05-15 03:18:55.199397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.199517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:24.237 [2024-05-15 03:18:55.199718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.199837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.237 [2024-05-15 03:18:55.199959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 
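waitforlisten's own locals are visible in the trace here: rpc_addr=/var/tmp/spdk.sock, max_retries=100, and the "Waiting for process..." banner. The loop below is a hypothetical paraphrase of what those knobs imply, not the actual autotest_common.sh source:

  # hedged sketch of a waitforlisten-style poll loop
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while ((max_retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died: give up
          [[ -S $rpc_addr ]] && return 0           # socket exists: ready
          sleep 0.5
      done
      return 1                                     # timed out
  }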
00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:24.237 [2024-05-15 03:18:55.200268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 03:18:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.237 [2024-05-15 03:18:55.200540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.200711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.200813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.201213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.201430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.201751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.201924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.202026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 
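From here to the end of this stretch the log is the same pattern on repeat: refused connect() attempts, an nvme_tcp_qpair_connect_sock error, and a "qpair failed and we were unable to recover it." line. The constant tqpair=0x7f1ffc000b90 suggests one controller qpair being re-dialed over and over, and the retries keep logging until the restarted target listens on 4420 again. A bystander shell loop that watches for that moment (illustrative only, not from the test script):

  # poll until 10.0.0.2:4420 starts accepting connections
  until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
      sleep 0.1
  done
  echo "target is accepting connections again"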
00:24:24.237 [2024-05-15 03:18:55.202221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.202429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.202710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.202883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.203123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.203315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.203549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.203737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.203841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 
00:24:24.237 [2024-05-15 03:18:55.204013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.204287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.204528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.237 [2024-05-15 03:18:55.204643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.237 qpair failed and we were unable to recover it. 00:24:24.237 [2024-05-15 03:18:55.204833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.204945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.204956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.205116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.205326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.205530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 
00:24:24.238 [2024-05-15 03:18:55.205721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.205835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.205931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.206182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.206370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.206657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.206761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.206880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.207295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 
00:24:24.238 [2024-05-15 03:18:55.207589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.207845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.207990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.208089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.208376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.208722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.208821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.208914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.209266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 
00:24:24.238 [2024-05-15 03:18:55.209533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.209742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.209918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.210011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.210235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.210454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.210659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.210760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.210920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 
00:24:24.238 [2024-05-15 03:18:55.211192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.211558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.211769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.211948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.212100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.212198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.212209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.238 [2024-05-15 03:18:55.212375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.212462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.238 [2024-05-15 03:18:55.212478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.238 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.212578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.212755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.212766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.212847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-05-15 03:18:55.213193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.213478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.213807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.213923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.214085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.214392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.214607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.214699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.214853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-05-15 03:18:55.215303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.215623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.215774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.215881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.216201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.216526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.216777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.216937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.217278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-05-15 03:18:55.217688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.217960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.218058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.218389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.218616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.218872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.218977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.219137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.219373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.219384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.219539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.219700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.219711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 
00:24:24.239 [2024-05-15 03:18:55.219868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.220206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.220385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.220647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.220753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.220906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.221063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.221076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.221224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.221317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.221328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.239 qpair failed and we were unable to recover it. 00:24:24.239 [2024-05-15 03:18:55.221498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.239 [2024-05-15 03:18:55.221645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.221656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-05-15 03:18:55.221801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.221887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.221898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.222039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.222233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.222520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.222830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.222989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.223136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.223341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-05-15 03:18:55.223606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.223705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.223868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.224198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.224488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.224767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.224863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.225024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.225207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.240 [2024-05-15 03:18:55.225606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.225854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.225947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.226051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.226332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.226662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.226773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.226942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.227044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.227055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 00:24:24.240 [2024-05-15 03:18:55.227271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.227375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.240 [2024-05-15 03:18:55.227386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.240 qpair failed and we were unable to recover it. 
00:24:24.242 [2024-05-15 03:18:55.243139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.243249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.243260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.242 qpair failed and we were unable to recover it.
00:24:24.242 [2024-05-15 03:18:55.243521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.243644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.243662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ff4000b90 with addr=10.0.0.2, port=4420
00:24:24.242 qpair failed and we were unable to recover it.
00:24:24.242 [2024-05-15 03:18:55.243880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.244078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.244096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.242 qpair failed and we were unable to recover it.
00:24:24.242 [2024-05-15 03:18:55.244222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.242 [2024-05-15 03:18:55.244398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.243 [2024-05-15 03:18:55.244415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.243 qpair failed and we were unable to recover it.
00:24:24.243 [2024-05-15 03:18:55.244583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.243 [2024-05-15 03:18:55.244669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.243 [2024-05-15 03:18:55.244681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.243 qpair failed and we were unable to recover it.
00:24:24.243 [2024-05-15 03:18:55.244785] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization...
00:24:24.243 [2024-05-15 03:18:55.244826] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:24.243 [2024-05-15 03:18:55.244837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.243 [2024-05-15 03:18:55.244931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.243 [2024-05-15 03:18:55.244941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420
00:24:24.243 qpair failed and we were unable to recover it.
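Interleaved with the connection errors, the target side logs its startup: SPDK v24.05-pre on DPDK 23.11.0, launched as nvmf with core mask 0xF0 (cores 4-7), telemetry disabled, and a fixed --base-virtaddr so that memory mappings land at a stable address across processes. A minimal sketch of how an SPDK application arrives at an equivalent environment, assuming the generic spdk_env_opts/spdk_env_init API from spdk/env.h; the remaining EAL arguments in the log (--file-prefix=spdk0, --match-allocations, the log levels) appear to be filled in by the env layer and the test harness rather than set explicitly here:

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    /* Populate defaults, then override the knobs visible in the log. */
    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                     /* application name, as in the EAL line */
    opts.core_mask = "0xF0";                /* matches "-c 0xF0": cores 4-7 */
    opts.base_virtaddr = 0x200000000000ULL; /* matches --base-virtaddr */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init() failed\n");
        return 1;
    }

    printf("SPDK environment initialized\n");
    return 0;
}

Note also the distinct tqpair values in this stretch (0x7f1ffc000b90, 0x7f1ff4000b90, 0x9f4c10, 0x7f2004000b90): the pointer printed by nvme_tcp.c is the per-qpair context address, so these are separate qpair objects all failing against the same 10.0.0.2:4420 endpoint.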
00:24:24.245 [2024-05-15 03:18:55.267835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.245 [2024-05-15 03:18:55.268019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.268276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.268561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.268758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.268928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.269095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.269421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.269696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.269857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 
00:24:24.246 [2024-05-15 03:18:55.269953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.270376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.270703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.270928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.271036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.271383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.271607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.271850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.272003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 
00:24:24.246 [2024-05-15 03:18:55.272273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.272534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.246 [2024-05-15 03:18:55.272804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.272901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.273050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.273307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.273619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.273805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.273992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 
00:24:24.246 [2024-05-15 03:18:55.274088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.274282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.274577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.274677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.246 [2024-05-15 03:18:55.274905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.275067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.246 [2024-05-15 03:18:55.275078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.246 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.275162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.275435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.275768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.275874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 
00:24:24.247 [2024-05-15 03:18:55.276004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.276297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.276571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.276824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.276933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.277034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.277235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.277438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 
00:24:24.247 [2024-05-15 03:18:55.277623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.277848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.277938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.278276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.278606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.278704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.278887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.279227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.279596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.279679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 
00:24:24.247 [2024-05-15 03:18:55.279779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.280162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.280590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.280890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.280992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.281104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.281287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.281561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.281669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 
00:24:24.247 [2024-05-15 03:18:55.281915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.282190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.282530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.282762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.282875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.283049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.283060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.283204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.283307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.283318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.247 qpair failed and we were unable to recover it. 00:24:24.247 [2024-05-15 03:18:55.283415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.247 [2024-05-15 03:18:55.283501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.283513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.283630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.283785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.283796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 
00:24:24.248 [2024-05-15 03:18:55.283944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.284220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.284502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.284613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.284826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.285311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.285609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.285716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.285924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 
00:24:24.248 [2024-05-15 03:18:55.286129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.286506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.286644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.286812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.287210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.287610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.287785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.287936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.288196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 
00:24:24.248 [2024-05-15 03:18:55.288485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.288789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.288886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.288988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.289267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.289502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.289797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.289993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.290117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 
00:24:24.248 [2024-05-15 03:18:55.290312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.290632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.290859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.290982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.291273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.291540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.291655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.291815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.292056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.292067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.248 qpair failed and we were unable to recover it. 00:24:24.248 [2024-05-15 03:18:55.292159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.248 [2024-05-15 03:18:55.292323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.292334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 
00:24:24.249 [2024-05-15 03:18:55.292576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.292667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.292677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.292794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.292884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.292895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.293046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.293233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.293631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.293790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.293953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.294217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 
00:24:24.249 [2024-05-15 03:18:55.294409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.294680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.294838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.294934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.295299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.295744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.295848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.296001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 00:24:24.249 [2024-05-15 03:18:55.296204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it. 
00:24:24.249 [2024-05-15 03:18:55.296668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.249 [2024-05-15 03:18:55.296786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420 00:24:24.249 qpair failed and we were unable to recover it.
00:24:24.249 [... the same sequence (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1ffc000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from 03:18:55.296970 through 03:18:55.313899 ...]
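The errno = 111 in these messages is Linux ECONNREFUSED: the host's TCP SYN reaches 10.0.0.2, but nothing is accepting connections on port 4420 (the NVMe/TCP default), so the kernel answers with RST and connect() fails immediately. A minimal standalone C sketch of the same syscall-level failure follows; it is an illustration only, not SPDK's posix_sock_create.

/* Illustration only (not SPDK's posix_sock_create): reproduce the
 * errno = 111 seen above. On Linux, errno 111 is ECONNREFUSED: the
 * TCP SYN reaches the target, but no listener is bound to the port,
 * so the kernel answers with RST and connect() fails at once. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         *   connect() failed, errno = 111 (Connection refused)      */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}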
00:24:24.251 [... the same failure sequence for tqpair=0x7f1ffc000b90 (addr=10.0.0.2, port=4420) continues from 03:18:55.314087 through 03:18:55.315860 ...]
00:24:24.251 [2024-05-15 03:18:55.315940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:24.252 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence resumes at 03:18:55.315946 and repeats continuously through 03:18:55.344716 ...]
00:24:24.254 [2024-05-15 03:18:55.344924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.254 [2024-05-15 03:18:55.345114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.254 [2024-05-15 03:18:55.345131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.254 qpair failed and we were unable to recover it. 00:24:24.254 [2024-05-15 03:18:55.345246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.254 [2024-05-15 03:18:55.345471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.345487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.345666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.345908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.345922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.346146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.346441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.346456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.346600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.346772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.346787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.347009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.347231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.347245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.347492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.347765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.347781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 
00:24:24.255 [2024-05-15 03:18:55.348009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.348253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.348579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.348851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.349092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.349317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.349332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.349518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.349750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.349764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.349966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.350383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 
00:24:24.255 [2024-05-15 03:18:55.350648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.350830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.350999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.351401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.351737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.351947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.352113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.352408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.352782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.352993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 
00:24:24.255 [2024-05-15 03:18:55.353193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.353357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.353375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.353644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.353873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.353890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.354139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.354371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.354388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.354640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.354912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.354929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.255 qpair failed and we were unable to recover it. 00:24:24.255 [2024-05-15 03:18:55.355092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.255 [2024-05-15 03:18:55.355290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.355305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.355560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.355732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.355748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.355992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.356257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.356274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 
00:24:24.256 [2024-05-15 03:18:55.356387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.356627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.356644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.356838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.357238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.357713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.357934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.358153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.358314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.358329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.358589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.358750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.358765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.358956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.359231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.359248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 
00:24:24.256 [2024-05-15 03:18:55.359499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.359627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.359643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.359876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.360146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.360163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.360406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.360641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.360657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.360833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.361280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.361712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.361975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.362179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.362358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.362373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 
00:24:24.256 [2024-05-15 03:18:55.362618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.362872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.362887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.363139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.363391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.363406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.363653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.363830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.363845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.364006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.364300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.364315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.364508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.364792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.364807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.364999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.365091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.365106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.365272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.365513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.365529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 
00:24:24.256 [2024-05-15 03:18:55.365759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.366029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.366045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.366264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.366455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.256 [2024-05-15 03:18:55.366474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.256 qpair failed and we were unable to recover it. 00:24:24.256 [2024-05-15 03:18:55.366640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.531 [2024-05-15 03:18:55.366862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.366878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.367128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.367295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.367310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.367561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.367735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.367749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.367919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.368091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.368106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.368379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.368625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.368641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 
00:24:24.532 [2024-05-15 03:18:55.368811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.368994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.369008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.369263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.369511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.369527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.369728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.369969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.369984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.370210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.370369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.370384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.370488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.370709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.370724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.370920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.371148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.371165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.371326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.371597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.371612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 
00:24:24.532 [2024-05-15 03:18:55.371793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.372184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.372599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.372777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.373021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.373198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.373213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.373491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.373707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.373722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.373910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.374143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.374158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.374407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.374637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.374652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 
00:24:24.532 [2024-05-15 03:18:55.374904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.375334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.375633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.375921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.376096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.376213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.376228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.376501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.376658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.376673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.376899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.377299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 
00:24:24.532 [2024-05-15 03:18:55.377811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.377993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.378240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.378412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.378427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.378590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.378840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.378855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.532 qpair failed and we were unable to recover it. 00:24:24.532 [2024-05-15 03:18:55.379018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.379132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.532 [2024-05-15 03:18:55.379147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.379394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.379660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.379676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.379909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.380319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 
00:24:24.533 [2024-05-15 03:18:55.380729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.380979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.381226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.381453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.381481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.381602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.381841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.381856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.382101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.382328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.382343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.382599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.382763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.382778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.382938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.383352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 
00:24:24.533 [2024-05-15 03:18:55.383641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.383862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.384057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.384273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.384764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.384888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.385067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.385310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.385325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.385573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.385735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.385750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.385933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.386169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.386183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 
00:24:24.533 [2024-05-15 03:18:55.386369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.386545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.386560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.386741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.387300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.387721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.387853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.388024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.388371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.388792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.388963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 
00:24:24.533 [2024-05-15 03:18:55.389115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.389347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.389362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.389592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.389879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.389896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.390019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.390116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.390131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.390374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.390621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.390636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.390895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.391136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.533 [2024-05-15 03:18:55.391151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.533 qpair failed and we were unable to recover it. 00:24:24.533 [2024-05-15 03:18:55.391324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.534 [2024-05-15 03:18:55.391545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.534 [2024-05-15 03:18:55.391562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.534 qpair failed and we were unable to recover it. 00:24:24.534 [2024-05-15 03:18:55.391720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.534 [2024-05-15 03:18:55.391970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.534 [2024-05-15 03:18:55.391986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.534 qpair failed and we were unable to recover it. 
00:24:24.534 [2024-05-15 03:18:55.392188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.392303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.392321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.392513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.392710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.392726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.392973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.393361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.393725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.393974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.394132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.394224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.394239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.394481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.394680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.394695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.394939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.395188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.395202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.395401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.395623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.395638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.395809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.396175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.396626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.396815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.396915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.397096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.397112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.397377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.397627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.397643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.397734] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:24.534 [2024-05-15 03:18:55.397760] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:24.534 [2024-05-15 03:18:55.397767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:24.534 [2024-05-15 03:18:55.397774] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:24.534 [2024-05-15 03:18:55.397780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:24.534 [2024-05-15 03:18:55.397815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.397834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:24:24.534 [2024-05-15 03:18:55.397941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:24:24.534 [2024-05-15 03:18:55.398036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.398051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 [2024-05-15 03:18:55.398045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.398047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:24:24.534 [2024-05-15 03:18:55.398218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.398331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.398344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.398580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.398848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.398863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.399104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.399343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.399358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.399528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.399684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.399701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.399972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.400283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.400298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
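The app.c NOTICE records above are SPDK's standard tracing banner: the application was started with tracepoint group mask 0xFFFF, so its event history is exposed through the shared-memory file /dev/shm/nvmf_trace.0 while it runs, and the interleaved reactor.c records show the SPDK reactors coming up on cores 4-7. Following the instructions printed in those NOTICEs, a trace snapshot could be taken from a second shell on the same host (a sketch assembled from the log's own commands, not something this job actually ran):

  $ spdk_trace -s nvmf -i 0      # attach to shm instance 0 of the 'nvmf' app and dump events
  $ cp /dev/shm/nvmf_trace.0 .   # or keep the raw shm file for offline analysis/debug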
00:24:24.534 [2024-05-15 03:18:55.400483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.400706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.400721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.400980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.401172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.401187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.401310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.401557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.401573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.401819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.402010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.402026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.402196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.402492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.402507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.534 qpair failed and we were unable to recover it.
00:24:24.534 [2024-05-15 03:18:55.402757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.534 [2024-05-15 03:18:55.403004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.403019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.403159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.403406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.403420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.403606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.403797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.403813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.403989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.404366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.404667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.404927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.405123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.405346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.405361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.405472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.405691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.405706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.405926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.406286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.406695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.406931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.407105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.407278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.407293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.407488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.407676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.407692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.407923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.408208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.408671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.408870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.409125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.409316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.409331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.409555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.409746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.409762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.409862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.410096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.410111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.410373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.410624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.410639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.410874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.411149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.411164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.411326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.411581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.411597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.411869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.412087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.412102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.412341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.412507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.412525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.412746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.413250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.413704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.413941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.414177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.414397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.414412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.414600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.414849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.414864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.415085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.415257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.415271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.535 qpair failed and we were unable to recover it.
00:24:24.535 [2024-05-15 03:18:55.415479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.535 [2024-05-15 03:18:55.415725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.415739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.415909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.416137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.416152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.416357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.416558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.416576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.416861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.417305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.417618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.417820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.418030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.418242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.418258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.418454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.418641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.418659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.418757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.419277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.419707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.419893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.420092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.420254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.420271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.420526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.420722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.420737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.420989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.421371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.421767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.421942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.422123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.422322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.422336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.422508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.422733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.422748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.422909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.423141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.423156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.423399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.423639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.423655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.423927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.424284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.424675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.424860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.536 qpair failed and we were unable to recover it.
00:24:24.536 [2024-05-15 03:18:55.425051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.536 [2024-05-15 03:18:55.425281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.425296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.425461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.425702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.425718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.425884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.426238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.426699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.426889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.427060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.427300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.427316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.427500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.427618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.427633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.427821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.428238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.428636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.428774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.428912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.429312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.429752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.429944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.430127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.430351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.430366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.430539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.430655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.430670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.430844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.431220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.431600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.431846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.432017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.432245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.432261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.432490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.432653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.432669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.432846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.433217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.433652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.433829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.433988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.434247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.434261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.434449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.434645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.434661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.434847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.435199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.435601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.435814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.436006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.436161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.436177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.537 [2024-05-15 03:18:55.436287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.436404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.537 [2024-05-15 03:18:55.436418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.537 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.436596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.436696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.436711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.436882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.437140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.437165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.437428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.437676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.437691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.437890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.438138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.438510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.438709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.438876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.439337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.439710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.439970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.440089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.440266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.440282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.440528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.440807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.440822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.441140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.441382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.441401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.441570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.441748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.441763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.441962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.442354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.442748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.442864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.443085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.443277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.538 [2024-05-15 03:18:55.443292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.538 qpair failed and we were unable to recover it.
00:24:24.538 [2024-05-15 03:18:55.443462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.443642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.443658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.443749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.443849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.443864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.444057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.444276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.444292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.444566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.444757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.444773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.444897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.445275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.445683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.445817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 
00:24:24.538 [2024-05-15 03:18:55.445930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.446454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.446757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.446947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.447130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.447310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.447325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.447576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.447811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.447826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.538 qpair failed and we were unable to recover it. 00:24:24.538 [2024-05-15 03:18:55.448050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.538 [2024-05-15 03:18:55.448223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.448238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.448459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.448655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.448670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 
00:24:24.539 [2024-05-15 03:18:55.448840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.448949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.448964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.449233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.449434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.449449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.449584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.449715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.449730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.449918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.450152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.450525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.450765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.451011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 
00:24:24.539 [2024-05-15 03:18:55.451415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.451796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.451978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.452096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.452320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.452334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.452504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.452744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.452759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.452890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.453328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.453697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.453823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 
00:24:24.539 [2024-05-15 03:18:55.453967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.454261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.454557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.454771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.454956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.455150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.455270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.455285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.455405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.455593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.455609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.455770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 
00:24:24.539 [2024-05-15 03:18:55.456241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.456601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.456786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.457047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.457516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.457862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.457971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.458086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.458259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.458274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 00:24:24.539 [2024-05-15 03:18:55.458481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.458706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.539 [2024-05-15 03:18:55.458721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.539 qpair failed and we were unable to recover it. 
00:24:24.539 [2024-05-15 03:18:55.458902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.459319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.459647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.459800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.459985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.460274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.460690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.460797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.460909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 
00:24:24.540 [2024-05-15 03:18:55.461333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.461684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.461815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.461926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.462315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.462745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.462859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.462962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.463450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 
00:24:24.540 [2024-05-15 03:18:55.463824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.463935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.464143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.464370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.464385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.464546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.464801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.464816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.464981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.465425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.465786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.465970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.466088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 
00:24:24.540 [2024-05-15 03:18:55.466435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.466733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.466964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.467214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.467413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.467428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.467652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.467847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.467862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.540 qpair failed and we were unable to recover it. 00:24:24.540 [2024-05-15 03:18:55.467982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.468093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.540 [2024-05-15 03:18:55.468107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.468348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.468516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.468531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.468716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.468953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.468968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 
00:24:24.541 [2024-05-15 03:18:55.469223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.469480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.469496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.469692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.469860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.469876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.470129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.470314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.470328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.470517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.470696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.470710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.470933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.471364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.471660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.471855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 
00:24:24.541 [2024-05-15 03:18:55.472018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.472257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.472696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.472945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.473212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.473404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.473419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.473631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.473833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.473848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.473968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.474076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.474091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.474302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.474622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.474637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 
00:24:24.541 [2024-05-15 03:18:55.474829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.475295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.475814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.475937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.476177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.476530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.476784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.476964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.477141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.477315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.477330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 
00:24:24.541 [2024-05-15 03:18:55.477516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.477717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.477731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.477893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.478323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.478679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.478798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.478966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.479230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.479245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.541 qpair failed and we were unable to recover it. 00:24:24.541 [2024-05-15 03:18:55.479358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.541 [2024-05-15 03:18:55.479558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.479572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.479796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.479903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.479918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 
00:24:24.542 [2024-05-15 03:18:55.480019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.480509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.480855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.480998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.481271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.481549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.481563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.481724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.481847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.481862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.481960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.482173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 
00:24:24.542 [2024-05-15 03:18:55.482555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.482766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.482927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.483368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.483603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.483809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.484007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.484341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 00:24:24.542 [2024-05-15 03:18:55.484650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.542 [2024-05-15 03:18:55.484835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.542 qpair failed and we were unable to recover it. 
00:24:24.542 [2024-05-15 03:18:55.485095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.542 [2024-05-15 03:18:55.485339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.542 [2024-05-15 03:18:55.485354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.542 qpair failed and we were unable to recover it.
[... the same three-message failure cycle -- two "posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 03:18:55.485 through 03:18:55.521; verbatim duplicates elided ...]
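On Linux, errno = 111 is ECONNREFUSED: the initiator's connect() to the NVMe/TCP target at 10.0.0.2:4420 is answered with a TCP RST because nothing is accepting on that port while the target side is down, so every qpair connect attempt fails the same way. A minimal standalone sketch (plain POSIX sockets, not SPDK code; 127.0.0.1 is an assumed stand-in address with no listener on 4420) that produces the same errno:

/*
 * Minimal sketch (plain POSIX sockets, not SPDK code): reproduce the
 * "connect() failed, errno = 111" above. On Linux errno 111 is
 * ECONNREFUSED -- the peer answers the SYN with RST because no listener
 * is accepting on the port, which is what happens here once the
 * NVMe/TCP target behind 10.0.0.2:4420 has gone away.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no local listener */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}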
00:24:24.546 [2024-05-15 03:18:55.521585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.546 [2024-05-15 03:18:55.521709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.546 [2024-05-15 03:18:55.521728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.546 qpair failed and we were unable to recover it.
[... the same failure cycle, now against tqpair=0x9f4c10 (addr=10.0.0.2, port=4420), repeats through 03:18:55.532; verbatim duplicates elided ...]
00:24:24.547 [2024-05-15 03:18:55.532416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.532586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.532601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.547 qpair failed and we were unable to recover it. 00:24:24.547 [2024-05-15 03:18:55.532713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.532900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.532915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.547 qpair failed and we were unable to recover it. 00:24:24.547 [2024-05-15 03:18:55.533088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.533249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.533264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.547 qpair failed and we were unable to recover it. 00:24:24.547 [2024-05-15 03:18:55.533433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.533599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.547 [2024-05-15 03:18:55.533614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.547 qpair failed and we were unable to recover it. 00:24:24.547 [2024-05-15 03:18:55.533746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.533903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.533918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.534185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.534411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.534426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.534651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.534826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.534840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 
00:24:24.548 [2024-05-15 03:18:55.534963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.535444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.535803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.535918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.536092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.536254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.536269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.536458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.536623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.536638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.536797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.537038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.537053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.537323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.537562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.537578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 
00:24:24.548 [2024-05-15 03:18:55.537827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.538226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.538620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.538871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.538980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.539251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.539267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.539440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.539605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.539621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.539820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.540283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 
00:24:24.548 [2024-05-15 03:18:55.540505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.540640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.540884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.541316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.541771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.541943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.542164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.542434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.542450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.542691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.542865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.542880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.543022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.543317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.543331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 
00:24:24.548 [2024-05-15 03:18:55.543558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.543684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.543699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.543832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.543990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.544006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.544117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.544287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.544302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.544546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.544733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.544748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.544910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.545077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.545092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.548 [2024-05-15 03:18:55.545313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.545559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.548 [2024-05-15 03:18:55.545575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.548 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.545748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.545944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.545959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 
00:24:24.549 [2024-05-15 03:18:55.546159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.546326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.546342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.546532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.546732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.546747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.546918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.547137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.547435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.547762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.547883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.548055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 
00:24:24.549 [2024-05-15 03:18:55.548385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.548676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.548865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.548982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.549259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.549636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.549823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.549981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.550266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.550281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.550478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.550704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.550719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 
00:24:24.549 [2024-05-15 03:18:55.550949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.551352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.551617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.551760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.551964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.552338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.552737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.552931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.553026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 
00:24:24.549 [2024-05-15 03:18:55.553491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.553790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.553971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.549 qpair failed and we were unable to recover it. 00:24:24.549 [2024-05-15 03:18:55.554201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.554421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.549 [2024-05-15 03:18:55.554436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.554641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.554767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.554782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.554981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.555233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.555651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.555773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 
00:24:24.550 [2024-05-15 03:18:55.555941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.556316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.556708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.556850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.557008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.557406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.557708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.557840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.558009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.558239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.558254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 
00:24:24.550 [2024-05-15 03:18:55.558447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.558631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.558647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.558773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.559276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.559718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.559835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.559944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.560259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.560730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.560841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 
00:24:24.550 [2024-05-15 03:18:55.561017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.561296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.561623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.561823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.561993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.562288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.562704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.562901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.563062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.563295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.563310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 
00:24:24.550 [2024-05-15 03:18:55.563535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.563698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.563714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.563914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.564353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.564727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.564975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.550 qpair failed and we were unable to recover it. 00:24:24.550 [2024-05-15 03:18:55.565090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.550 [2024-05-15 03:18:55.565373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.565389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.565614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.565793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.565807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.565993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.566205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.566220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 
00:24:24.551 [2024-05-15 03:18:55.566471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.566647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.566662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.566865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.567381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.567745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.567994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.568203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.568361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.568376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.568656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.568895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.568909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 00:24:24.551 [2024-05-15 03:18:55.569026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.569233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.569247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it. 
00:24:24.551 [2024-05-15 03:18:55.569478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.569732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.551 [2024-05-15 03:18:55.569747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.551 qpair failed and we were unable to recover it.
00:24:24.556 last message group repeated for every retry through [2024-05-15 03:18:55.626232]: each connect() to 10.0.0.2, port=4420 returned errno = 111 and the qpair could not be recovered.
00:24:24.556 [2024-05-15 03:18:55.626501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.626741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.626755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.626925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.627270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.627623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.627898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.628148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.628339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.628354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.628482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.628589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.628604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.628767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.628989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.629005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 
00:24:24.556 [2024-05-15 03:18:55.629186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.629368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.629382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.629633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.629833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.629849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.630014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.630199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.630213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.630379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.630511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.556 [2024-05-15 03:18:55.630527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.556 qpair failed and we were unable to recover it. 00:24:24.556 [2024-05-15 03:18:55.630769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.630886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.630902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.631075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.631314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.631330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.631527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.631688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.631703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 
00:24:24.557 [2024-05-15 03:18:55.631887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.632138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.632532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.632731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.632981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.633108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.633123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.633365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.633602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.633618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.633795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.634041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.634057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.634324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.634557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.634572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 
00:24:24.557 [2024-05-15 03:18:55.634821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.634993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.635008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.635232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.635348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.635364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.635538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.635708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.635723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.635912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.636258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.636674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.636857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.637047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.637238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.637252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 
00:24:24.557 [2024-05-15 03:18:55.637436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.637554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.637570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.637820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.637989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.638198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.638430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.638797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.638933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.639202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.639450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.639468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.639623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.639815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.639830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 
00:24:24.557 [2024-05-15 03:18:55.640086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.640286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.640301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.640499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.640672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.640686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.640931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.641304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.641653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.641837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.642000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.642215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.642229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 00:24:24.557 [2024-05-15 03:18:55.642394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.642569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.557 [2024-05-15 03:18:55.642585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.557 qpair failed and we were unable to recover it. 
00:24:24.557 [2024-05-15 03:18:55.642833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.643141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.643547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.643742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.643932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.644375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.644594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.644856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.644969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 
00:24:24.558 [2024-05-15 03:18:55.645328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.645804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.645994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.646218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.646449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.646469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.646694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.646862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.646878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.647073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.647258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.647273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.647535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.647703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.647718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.647820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.647997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.648011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 
00:24:24.558 [2024-05-15 03:18:55.648121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.648349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.648363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.648536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.648801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.648817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.649043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.649225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.649240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.649415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.649665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.649681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.649878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.650282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.650655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.650871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 
00:24:24.558 [2024-05-15 03:18:55.651053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.651239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.651254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.651503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.651671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.651687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.651853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.652022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.652038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.652144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.652252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.652267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.558 qpair failed and we were unable to recover it. 00:24:24.558 [2024-05-15 03:18:55.652432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.558 [2024-05-15 03:18:55.652539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.652555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.652717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.652906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.652922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.653074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.653184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.653199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 
00:24:24.559 [2024-05-15 03:18:55.653368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.653609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.653624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.653789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.654281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.654504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.654778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.654981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.655266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.655458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.655482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.655610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.655778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.655793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 
00:24:24.559 [2024-05-15 03:18:55.656017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.656352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.656708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.656844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.657030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.657218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.657232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.657390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.657581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.657596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.657703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.657993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.658008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.658317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.658513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.658529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 
00:24:24.559 [2024-05-15 03:18:55.658807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.658928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.658942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.659151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.659269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.659285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.659456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.659660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.659675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.659920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.660286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.660710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.660947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.661111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.661213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.661229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 
00:24:24.559 [2024-05-15 03:18:55.661424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.661542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.661558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.661814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.662251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.662753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.662940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.663081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.663320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.663335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.663526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.663698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.663713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 00:24:24.559 [2024-05-15 03:18:55.663833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.663996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.559 [2024-05-15 03:18:55.664011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.559 qpair failed and we were unable to recover it. 
00:24:24.559 [2024-05-15 03:18:55.664200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.559 [2024-05-15 03:18:55.664373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.559 [2024-05-15 03:18:55.664388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.559 qpair failed and we were unable to recover it.
[The same four-record error group repeats without interruption from 03:18:55.664 through 03:18:55.719 (console timestamps 00:24:24.559 through 00:24:24.879): every connect() attempt to 10.0.0.2:4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x9f4c10 each time, and each attempt ends with "qpair failed and we were unable to recover it."]
00:24:24.879 [2024-05-15 03:18:55.719999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.720335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.720749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.720886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.721002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.721381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.721728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.721928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.722036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 
00:24:24.879 [2024-05-15 03:18:55.722316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.722571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.722783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.722885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.723044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.723289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.723490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.723692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.723810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 
00:24:24.879 [2024-05-15 03:18:55.723905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.724127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.724365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.879 [2024-05-15 03:18:55.724591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.879 [2024-05-15 03:18:55.724710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.879 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.724812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.724922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.724937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.725032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.725198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 
00:24:24.880 [2024-05-15 03:18:55.725396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.725625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.725733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.725902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.726132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.726501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.726789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.726903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.727068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 
00:24:24.880 [2024-05-15 03:18:55.727288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.727588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.727869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.727980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.728136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.728341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.728685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.728825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.728940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 
00:24:24.880 [2024-05-15 03:18:55.729215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.729501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.729691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.729845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.729979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.730249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.730606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.730800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.731015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.731173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.731188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 
00:24:24.880 [2024-05-15 03:18:55.731289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.731396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.880 [2024-05-15 03:18:55.731411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.880 qpair failed and we were unable to recover it. 00:24:24.880 [2024-05-15 03:18:55.731508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.731616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.731631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.731733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.731827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.731841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.731996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.732401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.732651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.732776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.732881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 
00:24:24.881 [2024-05-15 03:18:55.733276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.733584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.733703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.733949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.734187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.734541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.734808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.735038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.735430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 
00:24:24.881 [2024-05-15 03:18:55.735738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.735975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.736192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.736441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.736456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.736561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.736679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.736694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.736898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.737307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.737572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.737817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.737997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 
00:24:24.881 [2024-05-15 03:18:55.738118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.738498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.738745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.738929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.739091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.739314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.739328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.739574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.739689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.739704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.739878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.740000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.881 [2024-05-15 03:18:55.740014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.881 qpair failed and we were unable to recover it. 00:24:24.881 [2024-05-15 03:18:55.740200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.740422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.740437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 
00:24:24.882 [2024-05-15 03:18:55.740582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.740678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.740692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.740863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.741279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.741715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.741895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.741994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.742273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.742652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.742789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 
00:24:24.882 [2024-05-15 03:18:55.742903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.743197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.743479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.743789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.743928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.744033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.744392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.744774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.744901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 
00:24:24.882 [2024-05-15 03:18:55.745028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.745346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.745754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.745950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.746156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.746468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.746775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.746953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.747141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 
00:24:24.882 [2024-05-15 03:18:55.747462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.747722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.747899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.748009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.748485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.882 qpair failed and we were unable to recover it. 00:24:24.882 [2024-05-15 03:18:55.748720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.882 [2024-05-15 03:18:55.748986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.749093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.749507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 
00:24:24.883 [2024-05-15 03:18:55.749812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.749931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.750026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.750478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.750761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.750885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.750990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.751392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.751805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.751978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 
00:24:24.883 [2024-05-15 03:18:55.752178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.752350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.752365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.752546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.752677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.752692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.752855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.753142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.753511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.753843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.753976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.754197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.754313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.754327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 
00:24:24.883 [2024-05-15 03:18:55.754549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.754661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.754675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.754794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.755197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.755482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.755751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.755865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.755969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.756139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.756154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 00:24:24.883 [2024-05-15 03:18:55.756315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.756417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.756434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.883 qpair failed and we were unable to recover it. 
00:24:24.883 [2024-05-15 03:18:55.756621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.883 [2024-05-15 03:18:55.756797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.756812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.757031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.757491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.757865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.757996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.758177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.758362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.758377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.758579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.758743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.758758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.758859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 
00:24:24.884 [2024-05-15 03:18:55.759327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.759682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.759807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.759949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.760262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.760601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.760871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.761136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.761496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 
00:24:24.884 [2024-05-15 03:18:55.761771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.761946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.762121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.762501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.762799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.762929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.763041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.763183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.763197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.763445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.763556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.763571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.884 qpair failed and we were unable to recover it. 00:24:24.884 [2024-05-15 03:18:55.763823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.763996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.884 [2024-05-15 03:18:55.764009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 
00:24:24.885 [2024-05-15 03:18:55.764242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.764446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.764461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.764697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.764872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.764886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.764998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.765301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.765678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.765884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.766068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.766367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 
00:24:24.885 [2024-05-15 03:18:55.766699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.766820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.766990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.767362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.767677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.767850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.768048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.768278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.768293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.768486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.768669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.768683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.768890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 
00:24:24.885 [2024-05-15 03:18:55.769316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.769625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.769750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.769967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.770378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.770708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.770897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.771144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.771329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.771344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.771531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.771706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.771720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 
00:24:24.885 [2024-05-15 03:18:55.771893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.772062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.772076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.772317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.772607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.772622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.772788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.773036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.773051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.773323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.773422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.773436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.885 qpair failed and we were unable to recover it. 00:24:24.885 [2024-05-15 03:18:55.773703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.885 [2024-05-15 03:18:55.773923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.773938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.774100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.774458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 
00:24:24.886 [2024-05-15 03:18:55.774749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.774884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.775120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.775387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.775738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.775927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.776170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.776409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.776423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.776596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.776715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.776730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.776842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 
00:24:24.886 [2024-05-15 03:18:55.777305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.777669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.777855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.778056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.778278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.778293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.778430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.778674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.778689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.778923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.779364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.779716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.779893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 
00:24:24.886 [2024-05-15 03:18:55.780095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.780458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.780755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.780939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.781060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.781515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.781816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.781920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.782037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.782279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.782294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 
00:24:24.886 [2024-05-15 03:18:55.782407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.782598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.782613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.886 qpair failed and we were unable to recover it. 00:24:24.886 [2024-05-15 03:18:55.782740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.886 [2024-05-15 03:18:55.782845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.782859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.783084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.783336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.783644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.783821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.783995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.784332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 
00:24:24.887 [2024-05-15 03:18:55.784695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.784865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.784971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.785469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.785728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.785918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.786035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.786449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.786706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.786875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 
00:24:24.887 [2024-05-15 03:18:55.787032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.787347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.787664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.787871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.788051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.788394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.788623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.788740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.788914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 
00:24:24.887 [2024-05-15 03:18:55.789224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.789610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.789846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.790012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.790199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.790213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.790479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.790595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.790610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.887 qpair failed and we were unable to recover it. 00:24:24.887 [2024-05-15 03:18:55.790782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.887 [2024-05-15 03:18:55.790950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.790965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.791100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.791501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 
00:24:24.888 [2024-05-15 03:18:55.791806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.791990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.792282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.792438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.792452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.792639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.792810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.792824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.792952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.793345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.793711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.793883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.794000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 
00:24:24.888 [2024-05-15 03:18:55.794377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.794659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.794871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.794977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.795287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.795631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.795739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.795885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.796080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.796094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 00:24:24.888 [2024-05-15 03:18:55.796273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.796426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.888 [2024-05-15 03:18:55.796440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.888 qpair failed and we were unable to recover it. 
00:24:24.893 [2024-05-15 03:18:55.831571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.831679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.831695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.831800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.831895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.831909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.832005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.832203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.832514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.832779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.832878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.893 [2024-05-15 03:18:55.832966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.833063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.893 [2024-05-15 03:18:55.833076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.893 qpair failed and we were unable to recover it.
00:24:24.894 [2024-05-15 03:18:55.838716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.838815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.838828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.838985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.839183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.839504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.839819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.839937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.840046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.840287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.840300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.840548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.840704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.840717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 
00:24:24.894 [2024-05-15 03:18:55.840965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.841184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.841461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.841698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.841802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.841966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.842248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.894 qpair failed and we were unable to recover it. 00:24:24.894 [2024-05-15 03:18:55.842606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.894 [2024-05-15 03:18:55.842719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 
00:24:24.895 [2024-05-15 03:18:55.842816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.842985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.842998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.843165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.843574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.843844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.843956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.844080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.844309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.844322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.844492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.844577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.844590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.844755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.844994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.845007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 
00:24:24.895 [2024-05-15 03:18:55.845217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.845462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.845483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.845648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.845868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.845881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.845973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.846080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.846093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.846357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.846544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.846557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.846803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.847239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.847775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.847901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 
00:24:24.895 [2024-05-15 03:18:55.848136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.848307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.848320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.848436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.848696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.848710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.848866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.849388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.849702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.849934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.850108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.850264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.850276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 00:24:24.895 [2024-05-15 03:18:55.850521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.850758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.895 [2024-05-15 03:18:55.850771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.895 qpair failed and we were unable to recover it. 
00:24:24.896 [2024-05-15 03:18:55.850884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.851340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.851790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.851998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.852099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.852350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.852363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.852552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.852853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.852866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.853031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.853247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.853260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.853498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.853664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.853677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 
00:24:24.896 [2024-05-15 03:18:55.853930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.854125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.854138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.854384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.854545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.854564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.854806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.855067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.855080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.855304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.855572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.855586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.855838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.856053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.856066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.856382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.856624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.856637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.856884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.857126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.857139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 
00:24:24.896 [2024-05-15 03:18:55.857390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.857603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.857617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.857815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.858055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.858068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.858244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.858432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.858444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.858733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.859299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.859676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.859909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.860096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.860199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.860212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 
00:24:24.896 [2024-05-15 03:18:55.860402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.860650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.860663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.860842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.861305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.861693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.861869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.896 qpair failed and we were unable to recover it. 00:24:24.896 [2024-05-15 03:18:55.862028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.896 [2024-05-15 03:18:55.862205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.862218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.862451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.862693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.862706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.862881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.863048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.863061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 
00:24:24.897 [2024-05-15 03:18:55.863291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.863525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.863539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.863821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.864194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.864657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.864863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.865057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.865231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.865244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.865347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.865579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.865593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.865789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 
00:24:24.897 [2024-05-15 03:18:55.866310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.866588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.866846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.867115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.867353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.867366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.867587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.867831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.867844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.868031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.868279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.868293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.868450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.868644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.868658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.869053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.869067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 
00:24:24.897 [2024-05-15 03:18:55.869312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.869535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.869549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.869801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.869999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.870012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.870257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.870475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.870489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.870681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.870904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.870917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.871030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.871275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.871289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.871470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.871645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.871658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.871881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 
00:24:24.897 [2024-05-15 03:18:55.872394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.872766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.872975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.897 qpair failed and we were unable to recover it. 00:24:24.897 [2024-05-15 03:18:55.873207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.897 [2024-05-15 03:18:55.873479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.873492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.873663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.873882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.873895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.874079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.874314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.874328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.874489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.874643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.874656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.874861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 
00:24:24.898 [2024-05-15 03:18:55.875252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.875705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.875888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.876041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.876283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.876296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.876470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.876639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.876655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.876906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.877135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.877149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.877417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.877666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.877682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 00:24:24.898 [2024-05-15 03:18:55.877881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.877995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.898 [2024-05-15 03:18:55.878010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.898 qpair failed and we were unable to recover it. 
00:24:24.898 [2024-05-15 03:18:55.878107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.898 [2024-05-15 03:18:55.878270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.898 [2024-05-15 03:18:55.878283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.898 qpair failed and we were unable to recover it.
00:24:24.898 [2024-05-15 03:18:55.878459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.898 [2024-05-15 03:18:55.878663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.898 [2024-05-15 03:18:55.878678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.898 qpair failed and we were unable to recover it.
[... the same four-line failure group (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x9f4c10 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 03:18:55.878 through 03:18:55.931; only the timestamps differ ...]
00:24:24.904 [2024-05-15 03:18:55.931437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.904 [2024-05-15 03:18:55.931680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.904 [2024-05-15 03:18:55.931694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.904 qpair failed and we were unable to recover it.
00:24:24.904 [2024-05-15 03:18:55.931871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.932162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.932461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.932737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.932909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.933133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.933531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.933804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.933973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 
00:24:24.904 [2024-05-15 03:18:55.934059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.934210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.934223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.934394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.934499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.934512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.904 qpair failed and we were unable to recover it. 00:24:24.904 [2024-05-15 03:18:55.934612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.904 [2024-05-15 03:18:55.934716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.934729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.934823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.934993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.935087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.935292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.935595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.935715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 
00:24:24.905 [2024-05-15 03:18:55.935804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.936207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.936552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.936731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.936891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.937183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.937488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.937761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.937912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 
00:24:24.905 [2024-05-15 03:18:55.938117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.938334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.905 [2024-05-15 03:18:55.938512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.905 qpair failed and we were unable to recover it. 00:24:24.905 [2024-05-15 03:18:55.938677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.938843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.938856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.939049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.939424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.939765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.939891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.940082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 
00:24:24.906 [2024-05-15 03:18:55.940418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.940691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.940814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.941058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.941331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.941623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.941758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.941857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.942207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 
00:24:24.906 [2024-05-15 03:18:55.942457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.942661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.942765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.942873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.943205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.943628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.943762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.943919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.944358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 
00:24:24.906 [2024-05-15 03:18:55.944705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.944833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.944954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.945512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.945800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.945905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.946151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.946236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.946249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.906 qpair failed and we were unable to recover it. 00:24:24.906 [2024-05-15 03:18:55.946404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.906 [2024-05-15 03:18:55.946559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.946573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 
00:24:24.907 [2024-05-15 03:18:55.946678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.946785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.946798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.946902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.947210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.947543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.947763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.947928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.948090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.948311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 
00:24:24.907 [2024-05-15 03:18:55.948654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.948845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.949090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.949221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.949234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.949422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.949623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.949637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.949888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.949997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.950010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.950235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.950406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.950420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.950588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.950760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.950773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.950927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 
00:24:24.907 [2024-05-15 03:18:55.951331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.951699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.951948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.952054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.952391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.952738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.952923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.953030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.953374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 
00:24:24.907 [2024-05-15 03:18:55.953687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.953863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.954020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.954388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.954695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.954861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.955024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.955113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.907 [2024-05-15 03:18:55.955127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.907 qpair failed and we were unable to recover it. 00:24:24.907 [2024-05-15 03:18:55.955217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.955439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.955452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.955674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.955825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.955838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 
00:24:24.908 [2024-05-15 03:18:55.955940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.956307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.956607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.956710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.956936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.957273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.957598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.957781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.957957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 
00:24:24.908 [2024-05-15 03:18:55.958351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.958687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.958886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.959080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.959359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.959592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.959852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.960017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.960225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 
00:24:24.908 [2024-05-15 03:18:55.960515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.960777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.961021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.961475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.961818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.961993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.962160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.962275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.962288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.962457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.962691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.962705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 00:24:24.908 [2024-05-15 03:18:55.962886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.963166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.908 [2024-05-15 03:18:55.963180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420 00:24:24.908 qpair failed and we were unable to recover it. 
00:24:24.908 [2024-05-15 03:18:55.963353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.908 [2024-05-15 03:18:55.963507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.908 [2024-05-15 03:18:55.963521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f4c10 with addr=10.0.0.2, port=4420
00:24:24.908 qpair failed and we were unable to recover it.
00:24:24.908 [... the same three-line failure sequence repeats for tqpair=0x9f4c10 from 03:18:55.963642 through 03:18:55.978509 ...]
00:24:24.910 [2024-05-15 03:18:55.978611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.911 [2024-05-15 03:18:55.978816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.911 [2024-05-15 03:18:55.978831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.911 qpair failed and we were unable to recover it.
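Editor's note: errno 111 in these records is ECONNREFUSED on Linux, i.e. each TCP connection attempt from SPDK's POSIX socket module (posix.c) to 10.0.0.2 on port 4420 (the IANA-registered NVMe over Fabrics port) was actively refused, which is what happens when no listener is bound there at that moment. A minimal standalone sketch, not SPDK code, with the address and port taken from the log above, reproduces the same errno when nothing is listening:

```c
/* Minimal illustration of how connect() yields errno 111 (ECONNREFUSED)
 * when no listener is bound to the target port. Not SPDK code; the
 * address and port mirror the log above. Build: cc -o refused refused.c */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```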
00:24:24.911 [2024-05-15 03:18:55.980107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.980365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.980619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.980745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.980845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.981187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.981468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.981762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.981877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 
00:24:24.911 [2024-05-15 03:18:55.982134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.982332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.982681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.982866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.983026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.983349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.983716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.983895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.984099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.984257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.984271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 
00:24:24.911 [2024-05-15 03:18:55.984377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.984491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.984506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.911 qpair failed and we were unable to recover it. 00:24:24.911 [2024-05-15 03:18:55.984614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.911 [2024-05-15 03:18:55.984729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.984743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.984864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.985151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.985433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.985660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.985769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.985873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 
00:24:24.912 [2024-05-15 03:18:55.986226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.986508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.986703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.986826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.987001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.987299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.987608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.987719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.987888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 
00:24:24.912 [2024-05-15 03:18:55.988180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.988531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.988727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.988914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.988986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.989393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.989688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.989929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.990083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 
00:24:24.912 [2024-05-15 03:18:55.990502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.990810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.990922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.991059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.991328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.912 [2024-05-15 03:18:55.991672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.912 [2024-05-15 03:18:55.991863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.912 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.991965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.992375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 
00:24:24.913 [2024-05-15 03:18:55.992730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.992943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.993095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.993444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.993749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.993868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.993970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.994235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.994496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 
00:24:24.913 [2024-05-15 03:18:55.994697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.994869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.994974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.995305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.995702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.995923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.996152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.996419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.996692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.996883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 
00:24:24.913 [2024-05-15 03:18:55.997074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.997246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.997260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.997459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.997570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.997584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.997756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.998245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.998594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.998828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.999054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.913 [2024-05-15 03:18:55.999448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 
00:24:24.913 [2024-05-15 03:18:55.999753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.913 [2024-05-15 03:18:55.999931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.913 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.000102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.000367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.000622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.000809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.000917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.001112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.001420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 
00:24:24.914 [2024-05-15 03:18:56.001629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.001856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.001950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.002227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.002462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.002751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.002919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.003158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.003512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 
00:24:24.914 [2024-05-15 03:18:56.003737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.003906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.004140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.004402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.004650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.004858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.005025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.005366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 00:24:24.914 [2024-05-15 03:18:56.005700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.914 [2024-05-15 03:18:56.005834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:24.914 qpair failed and we were unable to recover it. 
00:24:24.914 [2024-05-15 03:18:56.005922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.006326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.006650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.006828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.006988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.007191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.007612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.007716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.007911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.008072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.008086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.914 [2024-05-15 03:18:56.008229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.008338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.914 [2024-05-15 03:18:56.008352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.914 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.008530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.008776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.008790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.008880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.008993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.009091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.009461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.009772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.009956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.010136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.010548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.010785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.010986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.011163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.011374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.011638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.011743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.011858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.012239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.012536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.012800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.012968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.013125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.013391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.013675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.013777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.013947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.014186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.014533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.014797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.014970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.015085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.015248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.015262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.015416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.015518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.015532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:24.915 qpair failed and we were unable to recover it.
00:24:24.915 [2024-05-15 03:18:56.015708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.915 [2024-05-15 03:18:56.015882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.015897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.016006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.016277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.016543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.016821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.016940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.017058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.017381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.017689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.017805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.017962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.018311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.018573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.018783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.018905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.019018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.019285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.019626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.019805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.019958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.020184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.020473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.020712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.020818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.021216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.021478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.021712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.021817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.022008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.022188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.022343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.022456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.022474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.183 qpair failed and we were unable to recover it.
00:24:25.183 [2024-05-15 03:18:56.022576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.183 [2024-05-15 03:18:56.022738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.022751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.022860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.022967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.022981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.023157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.023335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.023349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.023568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.023654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.023668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.023940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.024290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.024636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.024885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.025058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.025307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.025320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.025493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.025666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.025680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.025857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.026338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.026682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.026836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.026939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.027284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.027589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.027784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.027931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.028092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.028394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.028692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.028866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.028962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.029306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.029603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.029769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.029948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.030154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.030435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.030737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.030926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.031091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.031257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.031271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.031518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.031732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.031745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.031946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.032070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.032084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.184 qpair failed and we were unable to recover it.
00:24:25.184 [2024-05-15 03:18:56.032239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.032348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.184 [2024-05-15 03:18:56.032361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.032603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.032784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.032797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.033020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.033452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.033691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.033873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.033997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.034345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.034697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.034871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.035017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.035242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.035603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.035810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.035924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.036222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.036574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.036697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.036787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.037183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.037556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.037668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.037766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.038194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.038621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.038827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.038996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.039403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.039721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.039832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.039989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.040340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.040572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.040751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.040915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.041239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.041510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.041588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.185 [2024-05-15 03:18:56.041786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.042010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.185 [2024-05-15 03:18:56.042024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.185 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.042203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.042391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.042404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.042574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.042688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.042701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.042863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.043156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.043424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.043642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.043827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.044000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.044326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.044685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.044870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.045095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.045346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.045359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.045578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.045679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.186 [2024-05-15 03:18:56.045692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.186 qpair failed and we were unable to recover it.
00:24:25.186 [2024-05-15 03:18:56.045795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.045898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.045911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.046035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.046196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.046208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.046493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.046653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.046666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.046865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.047274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.047716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.047947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.048220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.048439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.048455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 
00:24:25.186 [2024-05-15 03:18:56.048684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.048850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.048864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.048980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.049262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.049473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.049769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.049992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.050306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 00:24:25.186 [2024-05-15 03:18:56.050662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.186 [2024-05-15 03:18:56.050913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.186 qpair failed and we were unable to recover it. 
00:24:25.187 [2024-05-15 03:18:56.051115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.051305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.051319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.051493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.051762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.051775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.051946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.052168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.052183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.052407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.052585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.052598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.052768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.052986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.053000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.053231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.053474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.053488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.053716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.053879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.053892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 
00:24:25.187 [2024-05-15 03:18:56.054148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.054367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.054380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.054548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.054716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.054730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.054900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.055329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.055706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.055899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.056086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.056251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.056267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.056423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.056666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.056680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 
00:24:25.187 [2024-05-15 03:18:56.056960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.057369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.057804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.057979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.058178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.058430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.058443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.058710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.058863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.058876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.059075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.059244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.059258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.059486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.059679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.059692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 
00:24:25.187 [2024-05-15 03:18:56.059918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.060150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.060163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.060402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.060625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.187 [2024-05-15 03:18:56.060638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.187 qpair failed and we were unable to recover it. 00:24:25.187 [2024-05-15 03:18:56.060884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.061291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.061509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.061695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.061935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.062176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.062189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.062410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.062680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.062693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 
00:24:25.188 [2024-05-15 03:18:56.062942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.063172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.063185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.063375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.063616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.063630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.063853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.064302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.064761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.064962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.065137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.065231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.065244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.065401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.065658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.065672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 
00:24:25.188 [2024-05-15 03:18:56.065830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.066288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.066656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.066939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.067195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.067418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.067431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.067675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.067831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.067845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.068019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.068135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.068148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.068325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.068589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.068603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 
00:24:25.188 [2024-05-15 03:18:56.068822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.068994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.069007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.069255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.069500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.069514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.069689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.069883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.069896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.070013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.070233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.070246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.070432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.070669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.070682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.070925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.071105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.071118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 00:24:25.188 [2024-05-15 03:18:56.071365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.071607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.188 [2024-05-15 03:18:56.071620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.188 qpair failed and we were unable to recover it. 
00:24:25.189 [2024-05-15 03:18:56.074012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.189 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:25.189 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:24:25.189 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:25.189 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:25.189 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
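For reference, errno = 111 on Linux is ECONNREFUSED: the TCP connection attempt reached the host, but nothing was listening on the port, which is expected while the test holds the NVMe-oF target down. Below is a minimal standalone sketch (not SPDK code; 10.0.0.2:4420 simply mirrors the log) that reproduces the same errno from a plain connect():

/* Minimal illustration, not SPDK code: connect() to a TCP port with no
 * listener fails with errno = 111 (ECONNREFUSED), the error that
 * posix_sock_create() logs above. Address and port mirror the log; any
 * reachable host:port with nothing listening behaves the same. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target down this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}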
[... the same connect() failed, errno = 111 / sock connection error / qpair failed records, interleaved with the shell trace above, continue for every reconnect attempt from 03:18:56.074 through 03:18:56.100; only the final record is kept ...]
00:24:25.191 [2024-05-15 03:18:56.100647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.191 [2024-05-15 03:18:56.100821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.191 [2024-05-15 03:18:56.100837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420
00:24:25.191 qpair failed and we were unable to recover it.
00:24:25.191 [2024-05-15 03:18:56.101022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.101255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.101270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.101474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.101720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.101735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.101861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.102262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.102638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.102756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.102937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.103440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 
00:24:25.191 [2024-05-15 03:18:56.103760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.103942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.104063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.104448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.104672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.104873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.104995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.105359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.105718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.105923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 
00:24:25.191 [2024-05-15 03:18:56.106230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.106348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.106361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.191 qpair failed and we were unable to recover it. 00:24:25.191 [2024-05-15 03:18:56.106534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.191 [2024-05-15 03:18:56.106706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.106719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 00:24:25.192 [2024-05-15 03:18:56.106891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 00:24:25.192 [2024-05-15 03:18:56.107262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 00:24:25.192 [2024-05-15 03:18:56.107676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.107884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 00:24:25.192 [2024-05-15 03:18:56.108009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.108150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.108163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 00:24:25.192 [2024-05-15 03:18:56.108447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.108713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.192 [2024-05-15 03:18:56.108728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 00:24:25.192 qpair failed and we were unable to recover it. 
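Aside: errno 111 on Linux is ECONNREFUSED. Each connect() above was answered with a TCP reset because nothing was listening on 10.0.0.2:4420 while the target side of the disconnect test was down, so the NVMe/TCP host could not re-establish the qpair. A minimal shell sketch of the same failure mode, assuming a machine with no listener on the chosen port (the port number is illustrative, and the exact message wording can vary by bash version):

    # bash's /dev/tcp pseudo-device issues a plain connect(2); with no
    # listener the kernel's RST surfaces as ECONNREFUSED (errno 111)
    $ bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420'
    bash: connect: Connection refused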
00:24:25.192 [2024-05-15 03:18:56.108853 .. 03:18:56.109546] (connect() failed, errno = 111; qpair could not be recovered; duplicates elided)
00:24:25.192 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:25.192 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:25.192 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.192 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:25.192 [2024-05-15 03:18:56.109704 .. 03:18:56.110734] (connect() failed, errno = 111 retries against 10.0.0.2:4420 continue; duplicates elided)
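Aside: the two traced helpers above come from the nvmf test harness. The trap registers cleanup on SIGINT/SIGTERM/EXIT (collect the app's shared memory via process_shm, then tear the fixture down via nvmftestfini) so the target is stopped even if the test aborts, and rpc_cmd forwards its arguments to SPDK's JSON-RPC client. A sketch of the equivalent standalone call, assuming a running SPDK target with its RPC socket at the default /var/tmp/spdk.sock:

    # create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0;
    # on success the RPC prints the bdev name (the "Malloc0" seen below)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0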
00:24:25.192 [2024-05-15 03:18:56.110917 .. 03:18:56.130746] (the connect() failed, errno = 111 / sock connection error of tqpair=0x7f2004000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt; duplicate entries elided)
00:24:25.194 [2024-05-15 03:18:56.130933 .. 03:18:56.131098] (connect() failed, errno = 111; qpair could not be recovered; duplicates elided)
00:24:25.194 Malloc0
00:24:25.194 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.194 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:25.194 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.194 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:25.194 [2024-05-15 03:18:56.131361 .. 03:18:56.135666] (connect() failed, errno = 111 retries against 10.0.0.2:4420 continue; duplicates elided)
00:24:25.194 [2024-05-15 03:18:56.135776 .. 03:18:56.138249] (connect() failed, errno = 111 retries continue; duplicates elided)
00:24:25.195 [2024-05-15 03:18:56.138422 .. 03:18:56.138747] (connect() failed, errno = 111 retries continue; duplicates elided)
00:24:25.195 [2024-05-15 03:18:56.138864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:25.195 [2024-05-15 03:18:56.138921 .. 03:18:56.141164] (connect() failed, errno = 111 retries continue; duplicates elided)
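Aside: the *** TCP Transport Init *** notice is the target acknowledging the nvmf_create_transport RPC traced above; from this point the target can accept NVMe/TCP connections again. A sketch of the equivalent standalone call, assuming a running target on the default RPC socket (the bare -o in the trace is a harness-supplied transport option, omitted here):

    # initialize the TCP transport inside the nvmf target; the target
    # logs "*** TCP Transport Init ***" once the transport is created
    scripts/rpc.py nvmf_create_transport -t tcp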
[... connect() failed / sock connection error / qpair failed sequence repeats from 03:18:56.141 through 03:18:56.147, identical apart from timestamps ...]
[... connect() retry failures continue (03:18:56.147 through 03:18:56.149), interleaved with the shell trace below; repeats trimmed ...]
00:24:25.195 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
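rpc_cmd here appears to be the autotest harness's wrapper around SPDK's scripts/rpc.py; outside the harness, the equivalent direct call would look roughly like the sketch below (path assumes an SPDK checkout):

    # -a: allow any host NQN to connect; -s: serial number reported to hosts
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001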
[... connect() retry failures continue (03:18:56.149 through 03:18:56.155) ...]
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.196 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue (03:18:56.155 through 03:18:56.158) ...]
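nvmf_subsystem_add_ns attaches an existing bdev to the subsystem as a namespace, so the Malloc0 bdev must already exist at this point. A minimal sketch of that ordering (the 64 MiB size and 512-byte block size are illustrative, not taken from this run):

    # create a RAM-backed bdev named Malloc0: <total_size_mb> <block_size>
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # expose it as a namespace of the subsystem created above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0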
[... connect() retry failures continue (03:18:56.158 through 03:18:56.161) ...]
[... connect() retry failures continue (03:18:56.161 through 03:18:56.164) ...]
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures continue (03:18:56.164 through 03:18:56.167) ...]
00:24:25.197 [2024-05-15 03:18:56.166902] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
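The WARNING above concerns the wire format, not the CLI flags: in the JSON-RPC request the transport type used to be sent as [listen_]address.transport, and the decoder now wants it named "trtype" (with the old spelling slated for removal in v24.09, per the message). A sketch of the request body in the new spelling:

    # illustrative JSON-RPC payload shape, not captured from this run
    # { "method": "nvmf_subsystem_add_listener",
    #   "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
    #               "listen_address": { "trtype": "tcp", "adrfam": "ipv4",
    #                                   "traddr": "10.0.0.2", "trsvcid": "4420" } } }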
00:24:25.197 [2024-05-15 03:18:56.167120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:25.197 [2024-05-15 03:18:56.169430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.197 [2024-05-15 03:18:56.169526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.197 [2024-05-15 03:18:56.169549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.197 [2024-05-15 03:18:56.169559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.197 [2024-05-15 03:18:56.169572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.197 [2024-05-15 03:18:56.169596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.197 qpair failed and we were unable to recover it.
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:25.197 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... an identical CONNECT failure block follows at 03:18:56.179, differing only in timestamps ...]
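Decoding the status pair: sct 1 is the command-specific status code type, and sc 130 is 0x82, which for a Fabrics CONNECT command is "Connect Invalid Parameters" in the NVMe-oF status tables; that lines up with the target-side "Unknown controller ID 0x1" (the I/O qpair's CONNECT names controller ID 1, which this target instance does not recognize). The hex conversion, for reference:

    # plain arithmetic, nothing SPDK-specific
    printf 'sc %d = 0x%02x\n' 130 130    # -> sc 130 = 0x82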
00:24:25.198 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:25.198 03:18:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 1172796
[... the CONNECT failure block shown above at 03:18:56.169 (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x7f2004000b90 / CQ transport error -6 on qpair id 1 / qpair failed and we were unable to recover it) repeats 33 times in this span, at roughly 10 ms intervals, from 03:18:56.189 through 03:18:56.510, identical apart from timestamps ...]
00:24:25.460 [2024-05-15 03:18:56.520292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.520398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.520416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.520423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.460 [2024-05-15 03:18:56.520429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.460 [2024-05-15 03:18:56.520443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.460 qpair failed and we were unable to recover it. 00:24:25.460 [2024-05-15 03:18:56.530331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.530394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.530409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.530416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.460 [2024-05-15 03:18:56.530422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.460 [2024-05-15 03:18:56.530437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.460 qpair failed and we were unable to recover it. 00:24:25.460 [2024-05-15 03:18:56.540311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.540367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.540383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.540391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.460 [2024-05-15 03:18:56.540397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.460 [2024-05-15 03:18:56.540411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.460 qpair failed and we were unable to recover it. 
00:24:25.460 [2024-05-15 03:18:56.550356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.550448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.550463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.550474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.460 [2024-05-15 03:18:56.550480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.460 [2024-05-15 03:18:56.550496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.460 qpair failed and we were unable to recover it. 00:24:25.460 [2024-05-15 03:18:56.560428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.560498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.560513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.560520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.460 [2024-05-15 03:18:56.560526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.460 [2024-05-15 03:18:56.560543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.460 qpair failed and we were unable to recover it. 00:24:25.460 [2024-05-15 03:18:56.570387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.460 [2024-05-15 03:18:56.570455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.460 [2024-05-15 03:18:56.570474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.460 [2024-05-15 03:18:56.570481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.461 [2024-05-15 03:18:56.570487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.461 [2024-05-15 03:18:56.570502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.461 qpair failed and we were unable to recover it. 
00:24:25.461 [2024-05-15 03:18:56.580473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.461 [2024-05-15 03:18:56.580534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.461 [2024-05-15 03:18:56.580550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.461 [2024-05-15 03:18:56.580557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.461 [2024-05-15 03:18:56.580563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.461 [2024-05-15 03:18:56.580577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.461 qpair failed and we were unable to recover it. 00:24:25.461 [2024-05-15 03:18:56.590380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.461 [2024-05-15 03:18:56.590440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.461 [2024-05-15 03:18:56.590455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.461 [2024-05-15 03:18:56.590462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.461 [2024-05-15 03:18:56.590472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.461 [2024-05-15 03:18:56.590486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.461 qpair failed and we were unable to recover it. 00:24:25.461 [2024-05-15 03:18:56.600472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.461 [2024-05-15 03:18:56.600532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.461 [2024-05-15 03:18:56.600547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.461 [2024-05-15 03:18:56.600555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.461 [2024-05-15 03:18:56.600561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.461 [2024-05-15 03:18:56.600576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.461 qpair failed and we were unable to recover it. 
00:24:25.461 [2024-05-15 03:18:56.610503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.461 [2024-05-15 03:18:56.610576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.461 [2024-05-15 03:18:56.610596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.461 [2024-05-15 03:18:56.610603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.461 [2024-05-15 03:18:56.610609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.461 [2024-05-15 03:18:56.610623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.461 qpair failed and we were unable to recover it. 00:24:25.721 [2024-05-15 03:18:56.620460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.620535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.620550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.620558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.620563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.620578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 00:24:25.721 [2024-05-15 03:18:56.630559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.630626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.630641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.630649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.630655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.630670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 
00:24:25.721 [2024-05-15 03:18:56.640598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.640663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.640678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.640685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.640691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.640706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 00:24:25.721 [2024-05-15 03:18:56.650611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.650673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.650687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.650695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.650704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.650719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 00:24:25.721 [2024-05-15 03:18:56.660674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.660784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.660801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.660808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.660814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.660830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 
00:24:25.721 [2024-05-15 03:18:56.670728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.670832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.670847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.721 [2024-05-15 03:18:56.670854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.721 [2024-05-15 03:18:56.670860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.721 [2024-05-15 03:18:56.670874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.721 qpair failed and we were unable to recover it. 00:24:25.721 [2024-05-15 03:18:56.680716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.721 [2024-05-15 03:18:56.680780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.721 [2024-05-15 03:18:56.680795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.680802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.680808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.680823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.690736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.690795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.690810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.690817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.690823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.690837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 
00:24:25.722 [2024-05-15 03:18:56.700769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.700886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.700901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.700908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.700914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.700928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.710770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.710861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.710875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.710882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.710888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.710903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.720848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.720954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.720968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.720975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.720981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.720995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 
00:24:25.722 [2024-05-15 03:18:56.730828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.730898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.730913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.730920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.730926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.730941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.740875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.740938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.740953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.740960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.740969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.740983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.750913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.750973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.750988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.750995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.751002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.751016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 
00:24:25.722 [2024-05-15 03:18:56.760944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.761005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.761020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.761027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.761033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.761047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.770969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.771030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.771045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.771053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.771060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.771073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.781019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.781104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.781119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.781127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.781132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.781146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 
00:24:25.722 [2024-05-15 03:18:56.791036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.791112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.791127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.791134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.791141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.791155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.801045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.801106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.801120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.801128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.801134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.801148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 00:24:25.722 [2024-05-15 03:18:56.811068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.811127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.722 [2024-05-15 03:18:56.811142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.722 [2024-05-15 03:18:56.811150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.722 [2024-05-15 03:18:56.811156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.722 [2024-05-15 03:18:56.811171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.722 qpair failed and we were unable to recover it. 
00:24:25.722 [2024-05-15 03:18:56.821091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.722 [2024-05-15 03:18:56.821148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.821163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.821170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.821177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.821191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 00:24:25.723 [2024-05-15 03:18:56.831121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.831184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.831200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.831210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.831217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.831231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 00:24:25.723 [2024-05-15 03:18:56.841171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.841241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.841256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.841263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.841269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.841283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 
00:24:25.723 [2024-05-15 03:18:56.851186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.851266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.851281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.851289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.851295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.851309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 00:24:25.723 [2024-05-15 03:18:56.861230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.861288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.861304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.861311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.861317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.861333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 00:24:25.723 [2024-05-15 03:18:56.871180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.871236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.871252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.871259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.871265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.871280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 
00:24:25.723 [2024-05-15 03:18:56.881208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.723 [2024-05-15 03:18:56.881301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.723 [2024-05-15 03:18:56.881317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.723 [2024-05-15 03:18:56.881325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.723 [2024-05-15 03:18:56.881331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.723 [2024-05-15 03:18:56.881346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.723 qpair failed and we were unable to recover it. 00:24:25.983 [2024-05-15 03:18:56.891300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.983 [2024-05-15 03:18:56.891363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.983 [2024-05-15 03:18:56.891378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.983 [2024-05-15 03:18:56.891386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.983 [2024-05-15 03:18:56.891393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.983 [2024-05-15 03:18:56.891407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.983 qpair failed and we were unable to recover it. 00:24:25.983 [2024-05-15 03:18:56.901325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.983 [2024-05-15 03:18:56.901387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.983 [2024-05-15 03:18:56.901402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.983 [2024-05-15 03:18:56.901411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.983 [2024-05-15 03:18:56.901418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.983 [2024-05-15 03:18:56.901432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.983 qpair failed and we were unable to recover it. 
00:24:25.983 [2024-05-15 03:18:56.911373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.983 [2024-05-15 03:18:56.911440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.983 [2024-05-15 03:18:56.911456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.983 [2024-05-15 03:18:56.911463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.911474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.911490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.921334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.921395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.921414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.921421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.921427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.921442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.931347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.931407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.931422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.931429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.931435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.931450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 
00:24:25.984 [2024-05-15 03:18:56.941367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.941437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.941453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.941461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.941471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.941485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.951446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.951530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.951545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.951554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.951561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.951575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.961456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.961521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.961536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.961544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.961551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.961568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 
00:24:25.984 [2024-05-15 03:18:56.971475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.971538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.971553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.971560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.971566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.971581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.981564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.981620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.981634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.981642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.981648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.981662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:56.991530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:56.991593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:56.991608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:56.991615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:56.991621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:56.991636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 
00:24:25.984 [2024-05-15 03:18:57.001632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:57.001690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:57.001705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:57.001712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:57.001718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:57.001733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:57.011593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:57.011659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:57.011677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:57.011685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:57.011691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:57.011705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 00:24:25.984 [2024-05-15 03:18:57.021668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.984 [2024-05-15 03:18:57.021749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.984 [2024-05-15 03:18:57.021764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.984 [2024-05-15 03:18:57.021772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.984 [2024-05-15 03:18:57.021778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:25.984 [2024-05-15 03:18:57.021792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:25.984 qpair failed and we were unable to recover it. 
00:24:25.984 [2024-05-15 03:18:57.031650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.984 [2024-05-15 03:18:57.031707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.984 [2024-05-15 03:18:57.031721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.984 [2024-05-15 03:18:57.031729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.984 [2024-05-15 03:18:57.031735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.984 [2024-05-15 03:18:57.031749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.984 qpair failed and we were unable to recover it.
00:24:25.984 [2024-05-15 03:18:57.041690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.984 [2024-05-15 03:18:57.041752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.984 [2024-05-15 03:18:57.041767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.984 [2024-05-15 03:18:57.041775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.984 [2024-05-15 03:18:57.041782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.041796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.051687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.051753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.051768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.051776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.051785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.051799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.061719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.061782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.061798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.061805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.061811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.061824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.071826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.071921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.071935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.071942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.071948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.071962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.081864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.081923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.081938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.081945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.081952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.081967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.091895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.091958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.091974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.091981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.091987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.092001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.101852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.101916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.101931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.101938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.101944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.101958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.111948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.112011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.112027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.112035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.112041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.112057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.121923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.121987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.122002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.122011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.122017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.122031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.131942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.132040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.132054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.132062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.132068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.132082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:25.985 [2024-05-15 03:18:57.141979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:25.985 [2024-05-15 03:18:57.142041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:25.985 [2024-05-15 03:18:57.142056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:25.985 [2024-05-15 03:18:57.142064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:25.985 [2024-05-15 03:18:57.142073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:25.985 [2024-05-15 03:18:57.142087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:25.985 qpair failed and we were unable to recover it.
00:24:26.245 [2024-05-15 03:18:57.151987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.245 [2024-05-15 03:18:57.152056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.152071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.152078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.152084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.152098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.162047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.162109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.162125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.162132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.162138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.162152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.172064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.172126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.172140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.172148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.172154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.172168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.182141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.182203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.182220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.182228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.182234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.182248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.192183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.192239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.192254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.192262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.192269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.192283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.202143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.202205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.202221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.202228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.202234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.202248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.212225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.212287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.212302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.212310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.212316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.212330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.222229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.222287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.222303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.222310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.222317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.222331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.232306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.232369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.232384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.232395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.232401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.232416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.242353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.242416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.242432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.242439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.242445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.242459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.252283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.252345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.252360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.252367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.252374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.252388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.262395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.262452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.262472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.262479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.262486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.262501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.272392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.272493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.272508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.272515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.272521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.272535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.246 qpair failed and we were unable to recover it.
00:24:26.246 [2024-05-15 03:18:57.282449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.246 [2024-05-15 03:18:57.282516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.246 [2024-05-15 03:18:57.282531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.246 [2024-05-15 03:18:57.282538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.246 [2024-05-15 03:18:57.282545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.246 [2024-05-15 03:18:57.282560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.292474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.292532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.292547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.292554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.292561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.292576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.302485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.302548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.302563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.302570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.302576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.302591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.312533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.312592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.312607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.312614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.312620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.312635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.322571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.322639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.322657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.322664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.322670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.322685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.332536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.332599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.332613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.332621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.332627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.332642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.342613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.342684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.342699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.342706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.342712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.342726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.352634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.352694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.352709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.352716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.352722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.352737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.362663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.362725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.362740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.362747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.362753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.362770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.372695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.372753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.372768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.372775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.372782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.372796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.382725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.382786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.382801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.382808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.382814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.382828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.392769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.392831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.392846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.392853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.392859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.392874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.247 [2024-05-15 03:18:57.402819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.247 [2024-05-15 03:18:57.402926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.247 [2024-05-15 03:18:57.402940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.247 [2024-05-15 03:18:57.402947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.247 [2024-05-15 03:18:57.402953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.247 [2024-05-15 03:18:57.402967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.247 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.412800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.412869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.412887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.412895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.412901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.412915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.422831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.422891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.422905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.422912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.422918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.422932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.432856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.432916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.432931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.432938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.432944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.432960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.442962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.443023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.443038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.443045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.443051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.443066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.452986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.453095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.453110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.453117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.453124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.453141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.462952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.463013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.463028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.463035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.463041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.463055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.473004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.473071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.473085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.473093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.473099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.473113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.483043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.483111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.483125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.483132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.483138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.483153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.493041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.493100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.493116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.493123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.493130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.493144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.503074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.503140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.503155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.503162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.503168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.503182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.513093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.513150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.508 [2024-05-15 03:18:57.513166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.508 [2024-05-15 03:18:57.513173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.508 [2024-05-15 03:18:57.513180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.508 [2024-05-15 03:18:57.513195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.508 qpair failed and we were unable to recover it.
00:24:26.508 [2024-05-15 03:18:57.523135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.508 [2024-05-15 03:18:57.523230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.523244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.523251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.523258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.523272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.533149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.533249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.533266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.533272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.533279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.533293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.543178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.543236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.543252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.543259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.543269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.543283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.553208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.553271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.553286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.553293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.553300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.553315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.563257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.563317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.563333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.563340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.563347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.563361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.573277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.573335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.573350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.573357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.573364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.573378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.583311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.583380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.583396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.583403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.583410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.583425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.593321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.593384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.593399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.593406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.593412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.593427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.603371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.603432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.603447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.603454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.603461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.603486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.613378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.613441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.613456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.613467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.613475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.613490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.623440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.623506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.623522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.623529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.623535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.623550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.633440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.633503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.633518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.633529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.633535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.633549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.643483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.643543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.643559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.643567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.643573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.643587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.653521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.653583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.653598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.653606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.653612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.653627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.509 [2024-05-15 03:18:57.663573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.509 [2024-05-15 03:18:57.663632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.509 [2024-05-15 03:18:57.663647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.509 [2024-05-15 03:18:57.663654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.509 [2024-05-15 03:18:57.663661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.509 [2024-05-15 03:18:57.663675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.509 qpair failed and we were unable to recover it.
00:24:26.769 [2024-05-15 03:18:57.673573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.769 [2024-05-15 03:18:57.673633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.769 [2024-05-15 03:18:57.673647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.769 [2024-05-15 03:18:57.673654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.769 [2024-05-15 03:18:57.673660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.769 [2024-05-15 03:18:57.673675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.769 qpair failed and we were unable to recover it.
00:24:26.769 [2024-05-15 03:18:57.683592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.769 [2024-05-15 03:18:57.683658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.769 [2024-05-15 03:18:57.683672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.769 [2024-05-15 03:18:57.683680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.769 [2024-05-15 03:18:57.683686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.769 [2024-05-15 03:18:57.683701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.769 qpair failed and we were unable to recover it.
00:24:26.769 [2024-05-15 03:18:57.693619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.769 [2024-05-15 03:18:57.693687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.769 [2024-05-15 03:18:57.693701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.769 [2024-05-15 03:18:57.693708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.693715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.693729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.703654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.703713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.703729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.703736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.703743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.703757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.713716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.713778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.713793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.713801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.713807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.713822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.723716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.723780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.723795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.723805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.723811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.723825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.733730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.733793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.733808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.733815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.733821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.733836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.743762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.743821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.743835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.743843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.743850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.743864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.753811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.753897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.753912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.753919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.753925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.753940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.763874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.763992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.764008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.764015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.764022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.764036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.773895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.774005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.774022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.774029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.774035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.774050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.784028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.784087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.784103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.784111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.784117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.784133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.793898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.793959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.793974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.793982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.793988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.794002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.803944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.804003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.804018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.804026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.804032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.804046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.813996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.814080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.814099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.814106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.814113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.814127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.823990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.824054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.824069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.770 [2024-05-15 03:18:57.824076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.770 [2024-05-15 03:18:57.824082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.770 [2024-05-15 03:18:57.824096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.770 qpair failed and we were unable to recover it.
00:24:26.770 [2024-05-15 03:18:57.834033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.770 [2024-05-15 03:18:57.834092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.770 [2024-05-15 03:18:57.834106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.834114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.834121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.834135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.843985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.844046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.844060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.844068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.844075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.844089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.854076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.854141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.854155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.854162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.854168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.854186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.864102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.864160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.864175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.864183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.864189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.864204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.874128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.874185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.874200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.874207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.874214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.874228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.884164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.884225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.884241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.884248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.884255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.884269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.894202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.894267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.894282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.894289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.894295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.894310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.904206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.904290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.904308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.904316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.904322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.904337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.914224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.914281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.914296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.914303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.914310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.914324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:26.771 [2024-05-15 03:18:57.924280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:26.771 [2024-05-15 03:18:57.924348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:26.771 [2024-05-15 03:18:57.924363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:26.771 [2024-05-15 03:18:57.924370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:26.771 [2024-05-15 03:18:57.924377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:26.771 [2024-05-15 03:18:57.924391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:26.771 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.934283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.934344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.934359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.934367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.934373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.031 [2024-05-15 03:18:57.934388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.031 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.944316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.944379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.944394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.944401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.944410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.031 [2024-05-15 03:18:57.944424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.031 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.954350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.954411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.954427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.954435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.954441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.031 [2024-05-15 03:18:57.954456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.031 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.964397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.964462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.964482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.964489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.964496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.031 [2024-05-15 03:18:57.964511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.031 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.974423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.974488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.974504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.974511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.974517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.031 [2024-05-15 03:18:57.974531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.031 qpair failed and we were unable to recover it.
00:24:27.031 [2024-05-15 03:18:57.984439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.031 [2024-05-15 03:18:57.984501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.031 [2024-05-15 03:18:57.984516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.031 [2024-05-15 03:18:57.984524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.031 [2024-05-15 03:18:57.984530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:57.984545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:57.994496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:57.994557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:57.994572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:57.994580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:57.994587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:57.994601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.004503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.004568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.004583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.004590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.004597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.004612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.014534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.014592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.014607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.014615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.014621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.014636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.024547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.024607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.024622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.024629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.024636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.024650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.034582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.034635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.034650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.034662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.034668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.034683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.044641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.044752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.044766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.044773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.044779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.044794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.054696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.054761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.054776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.054783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.054789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.054804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.064669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.064728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.064743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.064750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.064757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.064771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.074703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.074762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.074778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.074785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.074792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.074807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.084725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.084833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.084848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.084855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.084862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.084876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.094749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.094808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.094823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.094830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.094837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.094851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.104784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.104842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.032 [2024-05-15 03:18:58.104857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.032 [2024-05-15 03:18:58.104865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.032 [2024-05-15 03:18:58.104871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.032 [2024-05-15 03:18:58.104885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.032 qpair failed and we were unable to recover it.
00:24:27.032 [2024-05-15 03:18:58.114808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.032 [2024-05-15 03:18:58.114908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.114922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.114929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.114936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.114951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.124871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.033 [2024-05-15 03:18:58.124936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.124951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.124961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.124967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.124981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.134883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.033 [2024-05-15 03:18:58.134945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.134960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.134967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.134973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.134988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.144894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.033 [2024-05-15 03:18:58.144956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.144972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.144979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.144986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.145000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.154933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.033 [2024-05-15 03:18:58.154991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.155005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.155012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.155019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.155034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.164958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.033 [2024-05-15 03:18:58.165036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.033 [2024-05-15 03:18:58.165050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.033 [2024-05-15 03:18:58.165058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.033 [2024-05-15 03:18:58.165064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.033 [2024-05-15 03:18:58.165078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.033 qpair failed and we were unable to recover it.
00:24:27.033 [2024-05-15 03:18:58.174985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.033 [2024-05-15 03:18:58.175052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.033 [2024-05-15 03:18:58.175067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.033 [2024-05-15 03:18:58.175075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.033 [2024-05-15 03:18:58.175081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.033 [2024-05-15 03:18:58.175096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.033 qpair failed and we were unable to recover it. 00:24:27.033 [2024-05-15 03:18:58.185005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.033 [2024-05-15 03:18:58.185068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.033 [2024-05-15 03:18:58.185083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.033 [2024-05-15 03:18:58.185090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.033 [2024-05-15 03:18:58.185097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.033 [2024-05-15 03:18:58.185111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.033 qpair failed and we were unable to recover it. 00:24:27.293 [2024-05-15 03:18:58.195022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.293 [2024-05-15 03:18:58.195080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.293 [2024-05-15 03:18:58.195096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.293 [2024-05-15 03:18:58.195103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.293 [2024-05-15 03:18:58.195110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.293 [2024-05-15 03:18:58.195124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.293 qpair failed and we were unable to recover it. 
00:24:27.293 [2024-05-15 03:18:58.205120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.293 [2024-05-15 03:18:58.205230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.293 [2024-05-15 03:18:58.205245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.293 [2024-05-15 03:18:58.205253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.293 [2024-05-15 03:18:58.205259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.293 [2024-05-15 03:18:58.205273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.293 qpair failed and we were unable to recover it. 00:24:27.293 [2024-05-15 03:18:58.215107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.215172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.215191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.215198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.215204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.215219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.225146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.225211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.225226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.225233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.225240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.225254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 
00:24:27.294 [2024-05-15 03:18:58.235160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.235223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.235239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.235246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.235252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.235267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.245193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.245257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.245272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.245279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.245285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.245300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.255249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.255322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.255337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.255345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.255351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.255369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 
00:24:27.294 [2024-05-15 03:18:58.265240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.265298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.265313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.265320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.265326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.265341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.275275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.275338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.275354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.275362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.275368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.275382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.285310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.285375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.285391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.285398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.285405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.285419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 
00:24:27.294 [2024-05-15 03:18:58.295329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.295394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.295409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.295417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.295423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.295437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.305364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.305424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.305443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.305450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.305456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.305475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.315328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.315388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.315403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.315410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.315416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.315431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 
00:24:27.294 [2024-05-15 03:18:58.325485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.325545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.325561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.325568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.325575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.325589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.335442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.335509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.335525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.335533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.335539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.335554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.294 qpair failed and we were unable to recover it. 00:24:27.294 [2024-05-15 03:18:58.345429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.294 [2024-05-15 03:18:58.345521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.294 [2024-05-15 03:18:58.345537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.294 [2024-05-15 03:18:58.345544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.294 [2024-05-15 03:18:58.345553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.294 [2024-05-15 03:18:58.345569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 
00:24:27.295 [2024-05-15 03:18:58.355506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.355566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.355581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.355588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.355595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.355609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.365554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.365619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.365634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.365641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.365648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.365662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.375582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.375646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.375661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.375668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.375675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.375689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 
00:24:27.295 [2024-05-15 03:18:58.385598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.385659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.385675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.385682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.385688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.385702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.395602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.395669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.395684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.395692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.395698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.395712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.405646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.405709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.405724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.405730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.405737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.405752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 
00:24:27.295 [2024-05-15 03:18:58.415766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.415831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.415846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.415853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.415859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.415874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.425754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.425817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.425831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.425838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.425845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.425859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.295 [2024-05-15 03:18:58.435733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.435797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.435812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.435819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.435828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.435843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 
00:24:27.295 [2024-05-15 03:18:58.445819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.295 [2024-05-15 03:18:58.445881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.295 [2024-05-15 03:18:58.445896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.295 [2024-05-15 03:18:58.445904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.295 [2024-05-15 03:18:58.445910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.295 [2024-05-15 03:18:58.445925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.295 qpair failed and we were unable to recover it. 00:24:27.555 [2024-05-15 03:18:58.455746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.555 [2024-05-15 03:18:58.455809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.555 [2024-05-15 03:18:58.455823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.555 [2024-05-15 03:18:58.455831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.555 [2024-05-15 03:18:58.455837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.555 [2024-05-15 03:18:58.455851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.555 qpair failed and we were unable to recover it. 00:24:27.555 [2024-05-15 03:18:58.465824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.555 [2024-05-15 03:18:58.465908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.555 [2024-05-15 03:18:58.465923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.555 [2024-05-15 03:18:58.465930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.555 [2024-05-15 03:18:58.465936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.555 [2024-05-15 03:18:58.465951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.555 qpair failed and we were unable to recover it. 
00:24:27.555 [2024-05-15 03:18:58.475812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.555 [2024-05-15 03:18:58.475874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.555 [2024-05-15 03:18:58.475888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.555 [2024-05-15 03:18:58.475896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.555 [2024-05-15 03:18:58.475902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.555 [2024-05-15 03:18:58.475916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.555 qpair failed and we were unable to recover it. 00:24:27.555 [2024-05-15 03:18:58.485836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.485897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.485913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.485920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.485927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.485940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.495934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.496009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.496023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.496030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.496037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.496051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 
00:24:27.556 [2024-05-15 03:18:58.505877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.505937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.505953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.505961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.505967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.505981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.515976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.516032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.516046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.516053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.516060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.516074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.526008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.526073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.526088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.526099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.526105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.526119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 
00:24:27.556 [2024-05-15 03:18:58.535970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.536030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.536045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.536052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.536059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.536073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.546031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.546112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.546127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.546134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.546140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.546155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.556085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.556186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.556201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.556208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.556215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.556229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 
00:24:27.556 [2024-05-15 03:18:58.566065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.566122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.566137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.566144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.566151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.566165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.576134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.576216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.576231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.576238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.576244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.576259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.586106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.586202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.586217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.586224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.586230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.586244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 
00:24:27.556 [2024-05-15 03:18:58.596174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.596236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.596251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.596258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.596264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.596278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.606249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.606308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.606323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.606330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.606336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.606350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 00:24:27.556 [2024-05-15 03:18:58.616206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.556 [2024-05-15 03:18:58.616271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.556 [2024-05-15 03:18:58.616289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.556 [2024-05-15 03:18:58.616296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.556 [2024-05-15 03:18:58.616303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.556 [2024-05-15 03:18:58.616317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.556 qpair failed and we were unable to recover it. 
00:24:27.557 [2024-05-15 03:18:58.626224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.626286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.626302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.626309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.626316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.626330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 00:24:27.557 [2024-05-15 03:18:58.636316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.636376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.636391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.636399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.636405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.636419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 00:24:27.557 [2024-05-15 03:18:58.646300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.646362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.646377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.646384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.646391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.646405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 
00:24:27.557 [2024-05-15 03:18:58.656312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.656376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.656391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.656398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.656404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.656422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 00:24:27.557 [2024-05-15 03:18:58.666428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.666492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.666507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.666515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.666521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.666535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 00:24:27.557 [2024-05-15 03:18:58.676444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:27.557 [2024-05-15 03:18:58.676504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:27.557 [2024-05-15 03:18:58.676520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:27.557 [2024-05-15 03:18:58.676527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:27.557 [2024-05-15 03:18:58.676533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:27.557 [2024-05-15 03:18:58.676547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.557 qpair failed and we were unable to recover it. 
00:24:27.557 [2024-05-15 03:18:58.686471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-05-15 03:18:58.686536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-05-15 03:18:58.686552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-05-15 03:18:58.686559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-05-15 03:18:58.686565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.557 [2024-05-15 03:18:58.686579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-05-15 03:18:58.696479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-05-15 03:18:58.696545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-05-15 03:18:58.696561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-05-15 03:18:58.696568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-05-15 03:18:58.696574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.557 [2024-05-15 03:18:58.696588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.557 [2024-05-15 03:18:58.706513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.557 [2024-05-15 03:18:58.706574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.557 [2024-05-15 03:18:58.706593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.557 [2024-05-15 03:18:58.706600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.557 [2024-05-15 03:18:58.706606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.557 [2024-05-15 03:18:58.706621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.557 qpair failed and we were unable to recover it.
00:24:27.817 [2024-05-15 03:18:58.716574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.817 [2024-05-15 03:18:58.716636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.817 [2024-05-15 03:18:58.716652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.817 [2024-05-15 03:18:58.716659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.817 [2024-05-15 03:18:58.716665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.817 [2024-05-15 03:18:58.716680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.817 qpair failed and we were unable to recover it.
00:24:27.817 [2024-05-15 03:18:58.726614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.817 [2024-05-15 03:18:58.726673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.726687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.726694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.726701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.726715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.736609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.736667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.736682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.736689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.736696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.736710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.746664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.746721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.746736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.746743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.746752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.746767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.756707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.756772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.756788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.756796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.756803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.756819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.766725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.766789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.766805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.766812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.766819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.766833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.776740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.776801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.776816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.776824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.776831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.776845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.786794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.786877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.786892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.786899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.786905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.786919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.796789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.796849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.796864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.796871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.796877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.796892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.806856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.806955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.806970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.806977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.806984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.806998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.816855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.816920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.816935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.816942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.816948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.816963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.826873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.826936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.826951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.826958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.826964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.826978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.836894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.836986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.837001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.837008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.837019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.837034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.846934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.846996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.847011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.847019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.818 [2024-05-15 03:18:58.847025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.818 [2024-05-15 03:18:58.847040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.818 qpair failed and we were unable to recover it.
00:24:27.818 [2024-05-15 03:18:58.856955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.818 [2024-05-15 03:18:58.857018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.818 [2024-05-15 03:18:58.857033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.818 [2024-05-15 03:18:58.857041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.857047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.857062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.866990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.867049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.867064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.867071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.867078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.867091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.877023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.877085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.877100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.877107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.877113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.877128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.887050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.887112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.887128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.887135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.887141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.887155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.897071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.897134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.897149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.897156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.897162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.897177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.907111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.907201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.907218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.907226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.907232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.907248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.917133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.917201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.917216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.917223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.917230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.917245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.927151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.927211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.927225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.927236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.927243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.927256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.937184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.937248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.937263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.937270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.937277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.937291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.947212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.947271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.947287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.947295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.947301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.947315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.957239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.957298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.957313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.957320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.957326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.957341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.967271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.967333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.967349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.967356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.967362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.967376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:27.819 [2024-05-15 03:18:58.977298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:27.819 [2024-05-15 03:18:58.977360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:27.819 [2024-05-15 03:18:58.977375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:27.819 [2024-05-15 03:18:58.977382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:27.819 [2024-05-15 03:18:58.977389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:27.819 [2024-05-15 03:18:58.977403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.819 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:58.987324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:58.987380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:58.987396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:58.987403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:58.987410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:58.987424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:58.997376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:58.997435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:58.997450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:58.997457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:58.997467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:58.997482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.007403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.007463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.007481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.007489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.007495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.007509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.017418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.017485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.017504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.017511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.017517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.017531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.027457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.027531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.027547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.027555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.027561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.027576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.037474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.037532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.037547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.037554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.037561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.037575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.047436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.047506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.047521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.047529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.047535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.047550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.079 [2024-05-15 03:18:59.057559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.079 [2024-05-15 03:18:59.057625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.079 [2024-05-15 03:18:59.057641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.079 [2024-05-15 03:18:59.057648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.079 [2024-05-15 03:18:59.057654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.079 [2024-05-15 03:18:59.057672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.079 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.067561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.067618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.067634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.067642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.067648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.067663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.077600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.077661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.077677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.077684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.077690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.077704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.087655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.087727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.087742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.087750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.087756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.087770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.097658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.097718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.097733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.097741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.097747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.097761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.107713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.107774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.107792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.107799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.107805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.107819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.117784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.117886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.117902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.117909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.117917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.117932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.127765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.127828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.127844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.127852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.127858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.127872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.137788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.137855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.137871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.137878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.137884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.137899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.147825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.147887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.147902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.147910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.147916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.147934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.157839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.157898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.157913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.157921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.157927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.157942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.167877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.167941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.167956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.167964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.167970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.167984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.177909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.178017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.178032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.178039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.178046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.178060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.187919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.187983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.187998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.188006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.188012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.080 [2024-05-15 03:18:59.188026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.080 qpair failed and we were unable to recover it.
00:24:28.080 [2024-05-15 03:18:59.197949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.080 [2024-05-15 03:18:59.198013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.080 [2024-05-15 03:18:59.198028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.080 [2024-05-15 03:18:59.198035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.080 [2024-05-15 03:18:59.198041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.081 [2024-05-15 03:18:59.198056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-05-15 03:18:59.207989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-05-15 03:18:59.208052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-05-15 03:18:59.208068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-05-15 03:18:59.208075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-05-15 03:18:59.208082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.081 [2024-05-15 03:18:59.208096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-05-15 03:18:59.218005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-05-15 03:18:59.218065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-05-15 03:18:59.218081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-05-15 03:18:59.218087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-05-15 03:18:59.218094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.081 [2024-05-15 03:18:59.218109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-05-15 03:18:59.228036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-05-15 03:18:59.228099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-05-15 03:18:59.228118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-05-15 03:18:59.228126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-05-15 03:18:59.228132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.081 [2024-05-15 03:18:59.228147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.081 [2024-05-15 03:18:59.238044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.081 [2024-05-15 03:18:59.238106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.081 [2024-05-15 03:18:59.238122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.081 [2024-05-15 03:18:59.238129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.081 [2024-05-15 03:18:59.238139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.081 [2024-05-15 03:18:59.238154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.081 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.248101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.248163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.248179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.248186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.248192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.248207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.258127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.258194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.258209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.258216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.258222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.258237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.268135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.268198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.268213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.268220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.268227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.268241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.278194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.278255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.278270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.278277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.278285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.278299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.288212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.288274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.288289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.288297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.288303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.288317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.298227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.298302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.298318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.298325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.298331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.298345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.308262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.308319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.308334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.308341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.308348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.308362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.318219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.340 [2024-05-15 03:18:59.318281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.340 [2024-05-15 03:18:59.318296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.340 [2024-05-15 03:18:59.318303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.340 [2024-05-15 03:18:59.318310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.340 [2024-05-15 03:18:59.318324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.340 qpair failed and we were unable to recover it.
00:24:28.340 [2024-05-15 03:18:59.328332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.341 [2024-05-15 03:18:59.328392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.341 [2024-05-15 03:18:59.328407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.341 [2024-05-15 03:18:59.328417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.341 [2024-05-15 03:18:59.328424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.341 [2024-05-15 03:18:59.328438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.341 qpair failed and we were unable to recover it.
00:24:28.341 [2024-05-15 03:18:59.338390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.341 [2024-05-15 03:18:59.338454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.341 [2024-05-15 03:18:59.338475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.341 [2024-05-15 03:18:59.338482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.341 [2024-05-15 03:18:59.338489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.341 [2024-05-15 03:18:59.338504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.341 qpair failed and we were unable to recover it.
00:24:28.341 [2024-05-15 03:18:59.348383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.341 [2024-05-15 03:18:59.348444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.341 [2024-05-15 03:18:59.348459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.341 [2024-05-15 03:18:59.348471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.341 [2024-05-15 03:18:59.348477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.341 [2024-05-15 03:18:59.348491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.341 qpair failed and we were unable to recover it.
00:24:28.341 [2024-05-15 03:18:59.358338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.341 [2024-05-15 03:18:59.358393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.341 [2024-05-15 03:18:59.358408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.341 [2024-05-15 03:18:59.358415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.341 [2024-05-15 03:18:59.358422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.341 [2024-05-15 03:18:59.358436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.341 qpair failed and we were unable to recover it.
00:24:28.341 [2024-05-15 03:18:59.368458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:28.341 [2024-05-15 03:18:59.368521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:28.341 [2024-05-15 03:18:59.368536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:28.341 [2024-05-15 03:18:59.368544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:28.341 [2024-05-15 03:18:59.368550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90
00:24:28.341 [2024-05-15 03:18:59.368566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:28.341 qpair failed and we were unable to recover it.
00:24:28.341 [2024-05-15 03:18:59.378452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.378522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.378537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.378545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.378551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.378566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.388489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.388547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.388563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.388570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.388577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.388591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.398505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.398597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.398612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.398619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.398625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.398640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-05-15 03:18:59.408552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.408612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.408626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.408634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.408640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.408655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.418578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.418642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.418657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.418667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.418673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.418688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.428612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.428670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.428685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.428692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.428698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.428712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-05-15 03:18:59.438641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.438698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.438713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.438720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.438727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.438741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.448671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.448731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.448746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.448753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.448760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.448774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.458658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.458729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.458744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.458751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.458757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.458772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-05-15 03:18:59.468717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.468778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.468794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.468801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.468807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.468821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.478767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.478837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.478853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.478860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.478866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.478880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-05-15 03:18:59.488783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.341 [2024-05-15 03:18:59.488847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.341 [2024-05-15 03:18:59.488862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.341 [2024-05-15 03:18:59.488870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.341 [2024-05-15 03:18:59.488877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.341 [2024-05-15 03:18:59.488891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.342 [2024-05-15 03:18:59.498794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.342 [2024-05-15 03:18:59.498858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.342 [2024-05-15 03:18:59.498873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.342 [2024-05-15 03:18:59.498881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.342 [2024-05-15 03:18:59.498887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.342 [2024-05-15 03:18:59.498901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.601 [2024-05-15 03:18:59.508830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.601 [2024-05-15 03:18:59.508890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.601 [2024-05-15 03:18:59.508911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.601 [2024-05-15 03:18:59.508919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.601 [2024-05-15 03:18:59.508924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.601 [2024-05-15 03:18:59.508939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.601 qpair failed and we were unable to recover it. 00:24:28.601 [2024-05-15 03:18:59.518870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.601 [2024-05-15 03:18:59.518926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.601 [2024-05-15 03:18:59.518942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.601 [2024-05-15 03:18:59.518949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.601 [2024-05-15 03:18:59.518956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.601 [2024-05-15 03:18:59.518970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.601 qpair failed and we were unable to recover it. 
00:24:28.601 [2024-05-15 03:18:59.528909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.528971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.528986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.528993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.528999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.529014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.538926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.538987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.539003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.539011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.539017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.539031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.548952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.549018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.549034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.549041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.549047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.549066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 
00:24:28.602 [2024-05-15 03:18:59.558992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.559047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.559062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.559069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.559076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.559090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.569032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.569092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.569107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.569115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.569121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.569135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.579045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.579101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.579116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.579123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.579130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.579144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 
00:24:28.602 [2024-05-15 03:18:59.589088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.589149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.589164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.589171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.589177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.589191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.599102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.599159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.599178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.599185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.599191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.599205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.609138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.609200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.609215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.609222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.609228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.609243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 
00:24:28.602 [2024-05-15 03:18:59.619158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.619219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.619234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.619242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.619248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.619263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.629183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.629241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.629255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.629263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.629270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.629284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.639190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.639252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.639267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.639275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.639285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.639299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 
00:24:28.602 [2024-05-15 03:18:59.649309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.649369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.649384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.649392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.649399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.602 [2024-05-15 03:18:59.649413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.602 qpair failed and we were unable to recover it. 00:24:28.602 [2024-05-15 03:18:59.659264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.602 [2024-05-15 03:18:59.659326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.602 [2024-05-15 03:18:59.659343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.602 [2024-05-15 03:18:59.659350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.602 [2024-05-15 03:18:59.659356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.659371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.669294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.669356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.669372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.669379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.669385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.669400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 
00:24:28.603 [2024-05-15 03:18:59.679316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.679374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.679389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.679397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.679403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.679417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.689335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.689402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.689418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.689425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.689431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.689445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.699374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.699437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.699452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.699459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.699471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.699487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 
00:24:28.603 [2024-05-15 03:18:59.709443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.709508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.709524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.709531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.709538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.709553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.719442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.719509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.719524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.719531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.719537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.719552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.729478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.729539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.729554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.729566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.729572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.729586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 
00:24:28.603 [2024-05-15 03:18:59.739429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.739530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.739546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.739553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.739559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.739575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.749537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.749599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.749614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.749622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.749628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.749643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 00:24:28.603 [2024-05-15 03:18:59.759498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.603 [2024-05-15 03:18:59.759560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.603 [2024-05-15 03:18:59.759575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.603 [2024-05-15 03:18:59.759582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.603 [2024-05-15 03:18:59.759588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.603 [2024-05-15 03:18:59.759603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.603 qpair failed and we were unable to recover it. 
00:24:28.863 [2024-05-15 03:18:59.769599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.863 [2024-05-15 03:18:59.769662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.863 [2024-05-15 03:18:59.769678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.863 [2024-05-15 03:18:59.769685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.863 [2024-05-15 03:18:59.769691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.863 [2024-05-15 03:18:59.769705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.863 qpair failed and we were unable to recover it. 00:24:28.863 [2024-05-15 03:18:59.779592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.863 [2024-05-15 03:18:59.779659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.779675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.779682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.779688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.779703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.789612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.789674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.789690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.789697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.789704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.789719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 
00:24:28.864 [2024-05-15 03:18:59.799665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.799729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.799745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.799752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.799758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.799773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.809698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.809759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.809775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.809782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.809789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.809803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.819667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.819732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.819747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.819757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.819763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.819778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 
00:24:28.864 [2024-05-15 03:18:59.829692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.829753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.829768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.829775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.829781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.829795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.839765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.839829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.839844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.839851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.839858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.839872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.849741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.849800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.849815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.849822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.849829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.849844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 
00:24:28.864 [2024-05-15 03:18:59.859893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.859976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.859991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.859999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.860005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.860019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.869906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.869972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.869988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.869995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.870001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.870016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.879926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.879989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.880004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.880011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.880018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.880032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 
00:24:28.864 [2024-05-15 03:18:59.889870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.889929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.889944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.889951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.889957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.889971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.899950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.900007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.900022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.900029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.900036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.900050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 00:24:28.864 [2024-05-15 03:18:59.909992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.864 [2024-05-15 03:18:59.910056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.864 [2024-05-15 03:18:59.910075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.864 [2024-05-15 03:18:59.910082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.864 [2024-05-15 03:18:59.910088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.864 [2024-05-15 03:18:59.910103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.864 qpair failed and we were unable to recover it. 
00:24:28.864 [2024-05-15 03:18:59.920020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.920123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.920139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.920146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.920152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.920168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:18:59.929997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.930083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.930099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.930107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.930113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.930128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:18:59.940016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.940074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.940089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.940096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.940102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.940116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 
00:24:28.865 [2024-05-15 03:18:59.950050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.950109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.950124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.950131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.950137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.950156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:18:59.960151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.960211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.960226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.960234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.960240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.960254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:18:59.970115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.970177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.970192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.970200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.970206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.970220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 
00:24:28.865 [2024-05-15 03:18:59.980234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.980296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.980311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.980319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.980325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.980339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:18:59.990222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:18:59.990281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:18:59.990296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:18:59.990303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:18:59.990309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:18:59.990325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:19:00.000249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:19:00.000312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:19:00.000331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:19:00.000338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:19:00.000344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:19:00.000359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 
00:24:28.865 [2024-05-15 03:19:00.010261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:19:00.010320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:19:00.010335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:19:00.010343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:19:00.010350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:19:00.010365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:28.865 [2024-05-15 03:19:00.020338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:28.865 [2024-05-15 03:19:00.020408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:28.865 [2024-05-15 03:19:00.020425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:28.865 [2024-05-15 03:19:00.020433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:28.865 [2024-05-15 03:19:00.020439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:28.865 [2024-05-15 03:19:00.020455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.865 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.030346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.030405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.030422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.030430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.030437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.030452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 
00:24:29.126 [2024-05-15 03:19:00.040431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.040542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.040558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.040565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.040575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.040590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.050450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.050537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.050562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.050575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.050584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.050606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.060445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.060525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.060548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.060559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.060567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.060587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 
00:24:29.126 [2024-05-15 03:19:00.070475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.070554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.070574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.070585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.070595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.070616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.080517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.080603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.080623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.080634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.080644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.080665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.090550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.090636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.090658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.090670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.090681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.090704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 
00:24:29.126 [2024-05-15 03:19:00.100496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.100566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.100585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.100593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.100600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.100617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.110564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.110626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.110644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.110651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.110658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.110674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.120634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.120694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.120710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.120718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.120724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.120740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 
00:24:29.126 [2024-05-15 03:19:00.130653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.126 [2024-05-15 03:19:00.130715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.126 [2024-05-15 03:19:00.130731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.126 [2024-05-15 03:19:00.130739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.126 [2024-05-15 03:19:00.130748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.126 [2024-05-15 03:19:00.130764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.126 qpair failed and we were unable to recover it. 00:24:29.126 [2024-05-15 03:19:00.140667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.140733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.140749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.140757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.140763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.140778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.150679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.150738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.150753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.150761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.150767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.150781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 
00:24:29.127 [2024-05-15 03:19:00.160732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.160794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.160809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.160817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.160823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.160837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.170777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.170839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.170855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.170862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.170869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.170883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.180788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.180854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.180870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.180877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.180883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.180898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 
00:24:29.127 [2024-05-15 03:19:00.190841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.190908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.190923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.190930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.190937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.190951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.200777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.200835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.200850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.200858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.200865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.200880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.210878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.210936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.210951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.210958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.210965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.210980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 
00:24:29.127 [2024-05-15 03:19:00.220889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.220950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.220964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.220975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.220981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.220995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.230872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.230932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.230947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.230955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.230961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.230976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.240966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.241025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.241040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.241048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.241054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.241068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 
00:24:29.127 [2024-05-15 03:19:00.250980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.251045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.251060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.251068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.251074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.251089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.261006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.261067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.261082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.261090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.261096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.261111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.127 qpair failed and we were unable to recover it. 00:24:29.127 [2024-05-15 03:19:00.271032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.127 [2024-05-15 03:19:00.271091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.127 [2024-05-15 03:19:00.271106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.127 [2024-05-15 03:19:00.271113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.127 [2024-05-15 03:19:00.271120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.127 [2024-05-15 03:19:00.271133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.128 qpair failed and we were unable to recover it. 
00:24:29.128 [2024-05-15 03:19:00.281066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.128 [2024-05-15 03:19:00.281127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.128 [2024-05-15 03:19:00.281143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.128 [2024-05-15 03:19:00.281150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.128 [2024-05-15 03:19:00.281156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.128 [2024-05-15 03:19:00.281170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.128 qpair failed and we were unable to recover it. 00:24:29.387 [2024-05-15 03:19:00.291091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.387 [2024-05-15 03:19:00.291152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.387 [2024-05-15 03:19:00.291168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.387 [2024-05-15 03:19:00.291175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.387 [2024-05-15 03:19:00.291182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.387 [2024-05-15 03:19:00.291196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-05-15 03:19:00.301115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.387 [2024-05-15 03:19:00.301177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.387 [2024-05-15 03:19:00.301193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.387 [2024-05-15 03:19:00.301200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.387 [2024-05-15 03:19:00.301206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.387 [2024-05-15 03:19:00.301221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-05-15 03:19:00.311150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.387 [2024-05-15 03:19:00.311207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.387 [2024-05-15 03:19:00.311225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.387 [2024-05-15 03:19:00.311233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.387 [2024-05-15 03:19:00.311239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.387 [2024-05-15 03:19:00.311253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.321178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.321237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.321253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.321260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.321267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.321281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.331208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.331266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.331281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.331289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.331295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.331309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-05-15 03:19:00.341217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.341279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.341295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.341302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.341308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.341322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.351258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.351319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.351335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.351342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.351348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.351365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.361302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.361356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.361371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.361378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.361385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.361399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-05-15 03:19:00.371316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.371378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.371394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.371402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.371408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.371423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.381332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.381389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.381405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.381412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.381419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.381433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.391380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.391435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.391451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.391458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.391469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.391484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-05-15 03:19:00.401407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.401468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.401487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.401494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.401501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.401515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.411377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.411442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.411457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.411469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.411476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.411490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-05-15 03:19:00.421454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.388 [2024-05-15 03:19:00.421518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.388 [2024-05-15 03:19:00.421534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.388 [2024-05-15 03:19:00.421542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.388 [2024-05-15 03:19:00.421548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.388 [2024-05-15 03:19:00.421563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.914 [2024-05-15 03:19:01.063188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.914 [2024-05-15 03:19:01.063247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.914 [2024-05-15 03:19:01.063262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.914 [2024-05-15 03:19:01.063270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.914 [2024-05-15 03:19:01.063276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:29.914 [2024-05-15 03:19:01.063291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.914 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.073237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.073293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.073308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.073316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.073322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.073337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.083318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.083375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.083390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.083398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.083404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.083419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 
00:24:30.174 [2024-05-15 03:19:01.093365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.093427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.093443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.093450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.093457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.093475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.103370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.103435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.103450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.103458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.103476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.103491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.113429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.113488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.113503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.113513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.113520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.113534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 
00:24:30.174 [2024-05-15 03:19:01.123401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.123504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.123522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.123531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.123537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.123551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.133508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.133608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.133623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.133631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.133637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.133652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.143517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.143575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.143590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.143597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.143603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.143618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 
00:24:30.174 [2024-05-15 03:19:01.153452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.153519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.153535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.153541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.153547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.153562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.163518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.174 [2024-05-15 03:19:01.163575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.174 [2024-05-15 03:19:01.163590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.174 [2024-05-15 03:19:01.163598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.174 [2024-05-15 03:19:01.163604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.174 [2024-05-15 03:19:01.163619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.174 qpair failed and we were unable to recover it. 00:24:30.174 [2024-05-15 03:19:01.173595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.173656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.173671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.173679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.173685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.173699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 
00:24:30.175 [2024-05-15 03:19:01.183586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.183655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.183671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.183680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.183687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.183701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.193625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.193689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.193704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.193711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.193718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.193732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.203588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.203652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.203671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.203678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.203685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.203700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 
00:24:30.175 [2024-05-15 03:19:01.213642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.213740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.213756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.213764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.213771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.213786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.223692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.223776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.223793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.223800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.223807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.223823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.233704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.233766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.233782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.233789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.233795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.233810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 
00:24:30.175 [2024-05-15 03:19:01.243761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.243820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.243835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.243842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.243849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.243867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.253762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.253828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.253843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.253850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.253856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.253870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.263777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.263842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.263858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.263865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.263872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.263886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 
00:24:30.175 [2024-05-15 03:19:01.273807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.273870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.273886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.273893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.273899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.273915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.283870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.283927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.283942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.283949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.283955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.283969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.175 [2024-05-15 03:19:01.293932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.293993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.294012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.294019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.294025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.294040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 
00:24:30.175 [2024-05-15 03:19:01.303889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.175 [2024-05-15 03:19:01.303954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.175 [2024-05-15 03:19:01.303969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.175 [2024-05-15 03:19:01.303976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.175 [2024-05-15 03:19:01.303983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.175 [2024-05-15 03:19:01.303998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.175 qpair failed and we were unable to recover it. 00:24:30.176 [2024-05-15 03:19:01.314006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.176 [2024-05-15 03:19:01.314065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.176 [2024-05-15 03:19:01.314080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.176 [2024-05-15 03:19:01.314087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.176 [2024-05-15 03:19:01.314094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.176 [2024-05-15 03:19:01.314108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.176 qpair failed and we were unable to recover it. 00:24:30.176 [2024-05-15 03:19:01.324042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.176 [2024-05-15 03:19:01.324102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.176 [2024-05-15 03:19:01.324117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.176 [2024-05-15 03:19:01.324125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.176 [2024-05-15 03:19:01.324131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.176 [2024-05-15 03:19:01.324145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.176 qpair failed and we were unable to recover it. 
00:24:30.176 [2024-05-15 03:19:01.334054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.176 [2024-05-15 03:19:01.334124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.176 [2024-05-15 03:19:01.334140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.176 [2024-05-15 03:19:01.334147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.176 [2024-05-15 03:19:01.334156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.176 [2024-05-15 03:19:01.334171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.176 qpair failed and we were unable to recover it. 00:24:30.435 [2024-05-15 03:19:01.343994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.344055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.344071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.344078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.344085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.344099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 00:24:30.435 [2024-05-15 03:19:01.354024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.354085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.354100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.354108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.354114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.354128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 
00:24:30.435 [2024-05-15 03:19:01.364123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.364185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.364200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.364207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.364214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.364228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 00:24:30.435 [2024-05-15 03:19:01.374171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.374233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.374248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.374255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.374262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.374276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 00:24:30.435 [2024-05-15 03:19:01.384112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.384173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.384188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.384195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.384201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.384216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 
00:24:30.435 [2024-05-15 03:19:01.394204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.435 [2024-05-15 03:19:01.394261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.435 [2024-05-15 03:19:01.394277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.435 [2024-05-15 03:19:01.394284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.435 [2024-05-15 03:19:01.394290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.435 [2024-05-15 03:19:01.394305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.435 qpair failed and we were unable to recover it. 00:24:30.435 [2024-05-15 03:19:01.404184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.404244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.404261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.404268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.404275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.436 [2024-05-15 03:19:01.404289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.436 qpair failed and we were unable to recover it. 00:24:30.436 [2024-05-15 03:19:01.414210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.414276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.414292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.414299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.414305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2004000b90 00:24:30.436 [2024-05-15 03:19:01.414320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.436 qpair failed and we were unable to recover it. 
00:24:30.436 [2024-05-15 03:19:01.424287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.424355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.424377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.424388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.424395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1ffc000b90 00:24:30.436 [2024-05-15 03:19:01.424412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:30.436 qpair failed and we were unable to recover it. 00:24:30.436 [2024-05-15 03:19:01.434354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.434455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.434488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.434500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.434509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9f4c10 00:24:30.436 [2024-05-15 03:19:01.434532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.436 qpair failed and we were unable to recover it. 00:24:30.436 [2024-05-15 03:19:01.444299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.444362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.444380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.444387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.444393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9f4c10 00:24:30.436 [2024-05-15 03:19:01.444408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.436 qpair failed and we were unable to recover it. 
00:24:30.436 [2024-05-15 03:19:01.454411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.454495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.454522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.454533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.454542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1ff4000b90 00:24:30.436 [2024-05-15 03:19:01.454564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.436 qpair failed and we were unable to recover it. 00:24:30.436 [2024-05-15 03:19:01.464405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.436 [2024-05-15 03:19:01.464471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.436 [2024-05-15 03:19:01.464488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.436 [2024-05-15 03:19:01.464495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.436 [2024-05-15 03:19:01.464501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1ff4000b90 00:24:30.436 [2024-05-15 03:19:01.464517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.436 qpair failed and we were unable to recover it. 00:24:30.436 [2024-05-15 03:19:01.464601] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:30.436 A controller has encountered a failure and is being reset. 00:24:30.436 [2024-05-15 03:19:01.464682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa02770 (9): Bad file descriptor 00:24:30.695 Controller properly reset. 00:24:30.695 Initializing NVMe Controllers 00:24:30.695 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:30.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:30.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:30.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:30.695 Initialization complete. Launching workers. 
00:24:30.695 Starting thread on core 1 00:24:30.695 Starting thread on core 2 00:24:30.695 Starting thread on core 3 00:24:30.695 Starting thread on core 0 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:24:30.695 00:24:30.695 real 0m11.392s 00:24:30.695 user 0m21.488s 00:24:30.695 sys 0m4.348s 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.695 ************************************ 00:24:30.695 END TEST nvmf_target_disconnect_tc2 00:24:30.695 ************************************ 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.695 rmmod nvme_tcp 00:24:30.695 rmmod nvme_fabrics 00:24:30.695 rmmod nvme_keyring 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1173445 ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1173445 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1173445 ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1173445 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1173445 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1173445' 00:24:30.695 killing process with pid 1173445 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1173445 00:24:30.695 [2024-05-15 03:19:01.761280] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:24:30.695 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1173445 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.954 03:19:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.487 03:19:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:33.487 00:24:33.487 real 0m19.348s 00:24:33.487 user 0m48.950s 00:24:33.487 sys 0m8.664s 00:24:33.487 03:19:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.487 03:19:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:33.487 ************************************ 00:24:33.487 END TEST nvmf_target_disconnect 00:24:33.487 ************************************ 00:24:33.487 03:19:04 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:24:33.487 03:19:04 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.487 03:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.487 03:19:04 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:24:33.487 00:24:33.487 real 18m41.431s 00:24:33.487 user 40m45.630s 00:24:33.487 sys 5m57.063s 00:24:33.487 03:19:04 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:33.488 03:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.488 ************************************ 00:24:33.488 END TEST nvmf_tcp 00:24:33.488 ************************************ 00:24:33.488 03:19:04 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:24:33.488 03:19:04 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:33.488 03:19:04 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:33.488 03:19:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:33.488 03:19:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.488 ************************************ 00:24:33.488 START TEST spdkcli_nvmf_tcp 00:24:33.488 ************************************ 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:33.488 * Looking for test storage... 
00:24:33.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1175156 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1175156 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1175156 ']' 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.488 03:19:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.488 [2024-05-15 03:19:04.379044] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:24:33.488 [2024-05-15 03:19:04.379098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175156 ] 00:24:33.488 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.488 [2024-05-15 03:19:04.435103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:33.488 [2024-05-15 03:19:04.508793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.488 [2024-05-15 03:19:04.508796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.056 03:19:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:34.056 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:34.056 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:34.056 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:34.056 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:34.056 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:34.056 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:34.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:34.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:34.056 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:34.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:34.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:34.056 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:34.056 ' 00:24:36.589 [2024-05-15 03:19:07.584141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.964 [2024-05-15 03:19:08.775797] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:37.964 [2024-05-15 03:19:08.776165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:40.494 [2024-05-15 03:19:11.063207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:42.397 [2024-05-15 03:19:13.101543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:43.771 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:43.771 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:43.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:43.771 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:43.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:43.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:43.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:43.771 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:43.771 03:19:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:24:44.029 03:19:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:44.029 03:19:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:44.029 03:19:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:44.029 03:19:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.029 03:19:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 03:19:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:44.288 03:19:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:44.288 03:19:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 03:19:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:44.288 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:44.288 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:44.288 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:44.288 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:44.288 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:44.288 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:44.288 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:44.288 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:44.288 ' 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:49.566 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:49.566 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:49.566 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:49.566 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:49.566 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:49.566 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:49.566 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:49.566 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:49.566 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1175156 ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1175156' 00:24:49.566 killing process with pid 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1175156 00:24:49.566 [2024-05-15 03:19:20.252750] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1175156 ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1175156 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1175156 ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1175156 00:24:49.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1175156) - No such process 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1175156 is not found' 00:24:49.566 Process with pid 1175156 is not found 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:49.566 00:24:49.566 real 0m16.258s 00:24:49.566 user 0m34.260s 00:24:49.566 sys 0m0.745s 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:49.566 03:19:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:24:49.566 ************************************ 00:24:49.566 END TEST spdkcli_nvmf_tcp 00:24:49.566 ************************************ 00:24:49.566 03:19:20 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:49.566 03:19:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:49.566 03:19:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:49.566 03:19:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.566 ************************************ 00:24:49.566 START TEST nvmf_identify_passthru 00:24:49.566 ************************************ 00:24:49.566 03:19:20 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:49.566 * Looking for test storage... 00:24:49.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.566 03:19:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.566 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.567 03:19:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.567 03:19:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:49.567 03:19:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.567 03:19:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.567 03:19:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:49.567 03:19:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.567 03:19:20 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.567 03:19:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.838 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.838 03:19:25 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.838 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:54.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:54.839 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:54.839 Found net devices under 0000:86:00.0: cvl_0_0 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:54.839 Found net devices under 0000:86:00.1: cvl_0_1 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:54.839 00:24:54.839 --- 10.0.0.2 ping statistics --- 00:24:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.839 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:24:54.839 00:24:54.839 --- 10.0.0.1 ping statistics --- 00:24:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.839 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.839 03:19:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.839 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:54.839 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:24:54.839 03:19:25 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:5e:00.0 00:24:54.839 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:24:54.839 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:24:54.840 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:24:54.840 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:54.840 03:19:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:54.840 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.030 
03:19:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:24:59.030 03:19:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:24:59.030 03:19:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:59.030 03:19:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:59.030 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1182205 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.218 03:19:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1182205 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1182205 ']' 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.218 03:19:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:03.218 [2024-05-15 03:19:34.258877] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:25:03.218 [2024-05-15 03:19:34.258924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.218 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.218 [2024-05-15 03:19:34.315767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.477 [2024-05-15 03:19:34.397274] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.477 [2024-05-15 03:19:34.397309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:03.477 [2024-05-15 03:19:34.397316] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.477 [2024-05-15 03:19:34.397322] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.477 [2024-05-15 03:19:34.397327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.477 [2024-05-15 03:19:34.397359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.477 [2024-05-15 03:19:34.397456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.477 [2024-05-15 03:19:34.397559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.477 [2024-05-15 03:19:34.397562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:25:04.044 03:19:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.044 INFO: Log level set to 20 00:25:04.044 INFO: Requests: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "method": "nvmf_set_config", 00:25:04.044 "id": 1, 00:25:04.044 "params": { 00:25:04.044 "admin_cmd_passthru": { 00:25:04.044 "identify_ctrlr": true 00:25:04.044 } 00:25:04.044 } 00:25:04.044 } 00:25:04.044 00:25:04.044 INFO: response: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "id": 1, 00:25:04.044 "result": true 00:25:04.044 } 00:25:04.044 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.044 03:19:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.044 INFO: Setting log level to 20 00:25:04.044 INFO: Setting log level to 20 00:25:04.044 INFO: Log level set to 20 00:25:04.044 INFO: Log level set to 20 00:25:04.044 INFO: Requests: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "method": "framework_start_init", 00:25:04.044 "id": 1 00:25:04.044 } 00:25:04.044 00:25:04.044 INFO: Requests: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "method": "framework_start_init", 00:25:04.044 "id": 1 00:25:04.044 } 00:25:04.044 00:25:04.044 [2024-05-15 03:19:35.167360] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:04.044 INFO: response: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "id": 1, 00:25:04.044 "result": true 00:25:04.044 } 00:25:04.044 00:25:04.044 INFO: response: 00:25:04.044 { 00:25:04.044 "jsonrpc": "2.0", 00:25:04.044 "id": 1, 00:25:04.044 "result": true 00:25:04.044 } 00:25:04.044 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.044 03:19:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.044 03:19:35 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.044 INFO: Setting log level to 40 00:25:04.044 INFO: Setting log level to 40 00:25:04.044 INFO: Setting log level to 40 00:25:04.044 [2024-05-15 03:19:35.180745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.044 03:19:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.044 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 03:19:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:25:04.303 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.303 03:19:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.587 Nvme0n1 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.587 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.587 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.587 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.587 [2024-05-15 03:19:38.071725] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:07.587 [2024-05-15 03:19:38.071956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.587 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.587 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.587 [ 00:25:07.587 { 00:25:07.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:07.587 "subtype": "Discovery", 00:25:07.587 "listen_addresses": [], 00:25:07.587 "allow_any_host": true, 00:25:07.587 "hosts": [] 00:25:07.587 }, 00:25:07.587 { 00:25:07.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.587 "subtype": "NVMe", 00:25:07.587 "listen_addresses": [ 00:25:07.587 { 00:25:07.587 "trtype": "TCP", 
00:25:07.587 "adrfam": "IPv4", 00:25:07.587 "traddr": "10.0.0.2", 00:25:07.587 "trsvcid": "4420" 00:25:07.587 } 00:25:07.587 ], 00:25:07.587 "allow_any_host": true, 00:25:07.587 "hosts": [], 00:25:07.587 "serial_number": "SPDK00000000000001", 00:25:07.587 "model_number": "SPDK bdev Controller", 00:25:07.587 "max_namespaces": 1, 00:25:07.587 "min_cntlid": 1, 00:25:07.587 "max_cntlid": 65519, 00:25:07.587 "namespaces": [ 00:25:07.587 { 00:25:07.587 "nsid": 1, 00:25:07.587 "bdev_name": "Nvme0n1", 00:25:07.587 "name": "Nvme0n1", 00:25:07.587 "nguid": "CC17A9A0FE094EAFA66E2D7358C1143C", 00:25:07.587 "uuid": "cc17a9a0-fe09-4eaf-a66e-2d7358c1143c" 00:25:07.587 } 00:25:07.588 ] 00:25:07.588 } 00:25:07.588 ] 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:07.588 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:07.588 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:07.588 03:19:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.588 rmmod nvme_tcp 00:25:07.588 rmmod nvme_fabrics 00:25:07.588 rmmod 
nvme_keyring 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1182205 ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1182205 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1182205 ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1182205 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1182205 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1182205' 00:25:07.588 killing process with pid 1182205 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1182205 00:25:07.588 [2024-05-15 03:19:38.539875] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:07.588 03:19:38 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1182205 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.962 03:19:40 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.962 03:19:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:08.962 03:19:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.501 03:19:42 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.501 00:25:11.501 real 0m21.598s 00:25:11.501 user 0m29.907s 00:25:11.501 sys 0m4.698s 00:25:11.501 03:19:42 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:11.501 03:19:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:11.501 ************************************ 00:25:11.501 END TEST nvmf_identify_passthru 00:25:11.501 ************************************ 00:25:11.501 03:19:42 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:11.501 03:19:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:11.501 03:19:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:11.501 03:19:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.501 ************************************ 00:25:11.501 START TEST nvmf_dif 
00:25:11.501 ************************************ 00:25:11.501 03:19:42 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:11.501 * Looking for test storage... 00:25:11.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.501 03:19:42 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.501 03:19:42 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.501 03:19:42 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.501 03:19:42 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.501 03:19:42 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.501 03:19:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.501 03:19:42 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.501 03:19:42 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.501 03:19:42 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:11.502 03:19:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.502 03:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:11.502 03:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:11.502 03:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:11.502 03:19:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:11.502 03:19:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.502 03:19:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:11.502 03:19:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.502 03:19:42 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.502 03:19:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.770 03:19:46 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.770 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.770 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.770 03:19:46 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:25:16.770 00:25:16.770 --- 10.0.0.2 ping statistics --- 00:25:16.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.770 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:25:16.770 00:25:16.770 --- 10.0.0.1 ping statistics --- 00:25:16.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.770 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:16.770 03:19:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:18.231 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:18.231 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:18.231 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:18.232 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:18.232 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:18.232 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:18.492 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:18.492 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:18.492 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:18.492 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.492 03:19:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:18.492 03:19:49 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1187489 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1187489 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1187489 ']' 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.492 03:19:49 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:18.492 03:19:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 [2024-05-15 03:19:49.617550] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:25:18.492 [2024-05-15 03:19:49.617598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.492 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.751 [2024-05-15 03:19:49.676041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.751 [2024-05-15 03:19:49.755440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.751 [2024-05-15 03:19:49.755481] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.751 [2024-05-15 03:19:49.755489] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.751 [2024-05-15 03:19:49.755495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.751 [2024-05-15 03:19:49.755500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
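
The nvmf_tcp_init/nvmfappstart phase traced above reduces to the following sequence; a condensed sketch (paths shortened) using the interface names and addresses enumerated in this run, where cvl_0_0/cvl_0_1 are the two E810 ports found under 0000:86:00.0/0000:86:00.1:

# target port moves into its own network namespace; initiator stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # verify reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt then runs inside the namespace so it can listen on 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
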
00:25:18.751 [2024-05-15 03:19:49.755522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:25:19.319 03:19:50 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:19.319 03:19:50 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.319 03:19:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:19.319 03:19:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:19.319 [2024-05-15 03:19:50.446756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.319 03:19:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:19.319 03:19:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:19.579 ************************************ 00:25:19.579 START TEST fio_dif_1_default 00:25:19.579 ************************************ 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:19.579 bdev_null0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:19.579 [2024-05-15 03:19:50.518886] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:19.579 [2024-05-15 03:19:50.519067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.579 { 00:25:19.579 "params": { 00:25:19.579 "name": "Nvme$subsystem", 00:25:19.579 "trtype": "$TEST_TRANSPORT", 00:25:19.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.579 "adrfam": "ipv4", 00:25:19.579 "trsvcid": "$NVMF_PORT", 00:25:19.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.579 "hdgst": ${hdgst:-false}, 00:25:19.579 "ddgst": ${ddgst:-false} 00:25:19.579 }, 00:25:19.579 "method": "bdev_nvme_attach_controller" 00:25:19.579 } 00:25:19.579 EOF 00:25:19.579 )") 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:19.579 03:19:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:19.579 "params": { 00:25:19.579 "name": "Nvme0", 00:25:19.579 "trtype": "tcp", 00:25:19.579 "traddr": "10.0.0.2", 00:25:19.579 "adrfam": "ipv4", 00:25:19.579 "trsvcid": "4420", 00:25:19.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:19.579 "hdgst": false, 00:25:19.579 "ddgst": false 00:25:19.580 }, 00:25:19.580 "method": "bdev_nvme_attach_controller" 00:25:19.580 }' 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:19.580 03:19:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.838 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:19.838 fio-3.35 00:25:19.838 Starting 1 thread 00:25:19.838 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.045 00:25:32.045 filename0: (groupid=0, jobs=1): err= 0: pid=1188037: Wed May 15 03:20:01 2024 00:25:32.045 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:25:32.045 slat (nsec): min=6028, max=25718, avg=6297.30, stdev=873.64 00:25:32.045 clat (usec): min=493, max=46741, avg=21047.26, stdev=20429.65 00:25:32.045 lat (usec): min=499, max=46767, avg=21053.55, stdev=20429.62 00:25:32.045 clat percentiles (usec): 00:25:32.045 | 1.00th=[ 506], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 553], 00:25:32.045 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[41157], 60.00th=[41157], 00:25:32.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:25:32.045 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[46924], 99.95th=[46924], 00:25:32.045 | 99.99th=[46924] 00:25:32.045 bw ( KiB/s): min= 704, max= 768, per=99.80%, avg=758.40, stdev=23.45, samples=20 00:25:32.045 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:25:32.045 lat (usec) : 500=0.53%, 750=49.16%, 1000=0.21% 00:25:32.045 lat (msec) : 50=50.11% 00:25:32.045 cpu : usr=94.83%, sys=4.93%, ctx=14, majf=0, minf=206 00:25:32.045 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:32.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.045 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.045 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:32.045 00:25:32.045 Run status group 0 (all jobs): 00:25:32.045 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10006-10006msec 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.045 00:25:32.045 real 0m11.154s 00:25:32.045 user 0m15.754s 00:25:32.045 sys 0m0.744s 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 ************************************ 00:25:32.045 END TEST fio_dif_1_default 00:25:32.045 ************************************ 00:25:32.045 03:20:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:32.045 03:20:01 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:32.045 03:20:01 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 ************************************ 00:25:32.045 START TEST fio_dif_1_multi_subsystems 00:25:32.045 ************************************ 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:32.045 
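
create_subsystems expands, per subsystem id, into the four rpc_cmd calls that follow in the trace, layered on the transport created earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip. A standalone sketch of the same sequence for id 0 via scripts/rpc.py (rpc_cmd is the test harness wrapper around it):

rpc=./scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB null bdev, 512 B blocks, DIF type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
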
03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 bdev_null0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.045 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.046 [2024-05-15 03:20:01.743638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.046 bdev_null1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:32.046 { 00:25:32.046 "params": { 00:25:32.046 "name": "Nvme$subsystem", 00:25:32.046 "trtype": "$TEST_TRANSPORT", 00:25:32.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.046 "adrfam": "ipv4", 00:25:32.046 "trsvcid": "$NVMF_PORT", 00:25:32.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.046 "hdgst": ${hdgst:-false}, 00:25:32.046 "ddgst": ${ddgst:-false} 00:25:32.046 }, 00:25:32.046 "method": "bdev_nvme_attach_controller" 00:25:32.046 } 00:25:32.046 EOF 00:25:32.046 )") 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # 
local sanitizers 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:32.046 { 00:25:32.046 "params": { 00:25:32.046 "name": "Nvme$subsystem", 00:25:32.046 "trtype": "$TEST_TRANSPORT", 00:25:32.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.046 "adrfam": "ipv4", 00:25:32.046 "trsvcid": "$NVMF_PORT", 00:25:32.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.046 "hdgst": ${hdgst:-false}, 00:25:32.046 "ddgst": ${ddgst:-false} 00:25:32.046 }, 00:25:32.046 "method": "bdev_nvme_attach_controller" 00:25:32.046 } 00:25:32.046 EOF 00:25:32.046 )") 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:32.046 "params": { 00:25:32.046 "name": "Nvme0", 00:25:32.046 "trtype": "tcp", 00:25:32.046 "traddr": "10.0.0.2", 00:25:32.046 "adrfam": "ipv4", 00:25:32.046 "trsvcid": "4420", 00:25:32.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:32.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:32.046 "hdgst": false, 00:25:32.046 "ddgst": false 00:25:32.046 }, 00:25:32.046 "method": "bdev_nvme_attach_controller" 00:25:32.046 },{ 00:25:32.046 "params": { 00:25:32.046 "name": "Nvme1", 00:25:32.046 "trtype": "tcp", 00:25:32.046 "traddr": "10.0.0.2", 00:25:32.046 "adrfam": "ipv4", 00:25:32.046 "trsvcid": "4420", 00:25:32.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.046 "hdgst": false, 00:25:32.046 "ddgst": false 00:25:32.046 }, 00:25:32.046 "method": "bdev_nvme_attach_controller" 00:25:32.046 }' 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:32.046 03:20:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.046 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:32.046 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:32.046 fio-3.35 00:25:32.046 Starting 2 threads 00:25:32.046 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.017 00:25:42.017 filename0: (groupid=0, jobs=1): err= 0: pid=1190004: Wed May 15 03:20:12 2024 00:25:42.017 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10016msec) 00:25:42.017 slat (nsec): min=6284, max=37115, avg=8116.25, stdev=2913.69 00:25:42.017 clat (usec): min=40851, max=42087, avg=41360.05, stdev=484.57 00:25:42.017 lat (usec): min=40858, max=42099, avg=41368.17, stdev=484.59 00:25:42.017 clat percentiles (usec): 00:25:42.017 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:42.017 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:42.017 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:42.017 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:42.017 | 99.99th=[42206] 
00:25:42.017 bw ( KiB/s): min= 384, max= 416, per=33.61%, avg=385.60, stdev= 7.16, samples=20 00:25:42.017 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:25:42.017 lat (msec) : 50=100.00% 00:25:42.017 cpu : usr=97.65%, sys=2.06%, ctx=29, majf=0, minf=159 00:25:42.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.017 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:42.017 filename1: (groupid=0, jobs=1): err= 0: pid=1190005: Wed May 15 03:20:12 2024 00:25:42.017 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10011msec) 00:25:42.017 slat (nsec): min=6328, max=35954, avg=7373.40, stdev=2219.11 00:25:42.017 clat (usec): min=486, max=42760, avg=21054.12, stdev=20494.67 00:25:42.017 lat (usec): min=492, max=42787, avg=21061.49, stdev=20493.96 00:25:42.017 clat percentiles (usec): 00:25:42.017 | 1.00th=[ 490], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 515], 00:25:42.017 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[41157], 60.00th=[41157], 00:25:42.017 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:25:42.017 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:25:42.017 | 99.99th=[42730] 00:25:42.017 bw ( KiB/s): min= 704, max= 768, per=66.18%, avg=758.45, stdev=23.32, samples=20 00:25:42.017 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:25:42.017 lat (usec) : 500=6.84%, 750=42.89%, 1000=0.16% 00:25:42.017 lat (msec) : 50=50.11% 00:25:42.017 cpu : usr=97.80%, sys=1.94%, ctx=13, majf=0, minf=115 00:25:42.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.017 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:42.017 00:25:42.017 Run status group 0 (all jobs): 00:25:42.017 READ: bw=1145KiB/s (1173kB/s), 387KiB/s-759KiB/s (396kB/s-777kB/s), io=11.2MiB (11.7MB), run=10011-10016msec 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.017 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 00:25:42.276 real 0m11.493s 00:25:42.276 user 0m26.341s 00:25:42.276 sys 0m0.755s 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 ************************************ 00:25:42.276 END TEST fio_dif_1_multi_subsystems 00:25:42.276 ************************************ 00:25:42.276 03:20:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:42.276 03:20:13 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:42.276 03:20:13 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 ************************************ 00:25:42.276 START TEST fio_dif_rand_params 00:25:42.276 ************************************ 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:42.276 03:20:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 bdev_null0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:42.276 [2024-05-15 03:20:13.308966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:42.276 { 00:25:42.276 "params": { 00:25:42.276 "name": "Nvme$subsystem", 00:25:42.276 "trtype": "$TEST_TRANSPORT", 00:25:42.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.276 "adrfam": "ipv4", 00:25:42.276 "trsvcid": "$NVMF_PORT", 00:25:42.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.276 "hdgst": ${hdgst:-false}, 00:25:42.276 "ddgst": 
${ddgst:-false} 00:25:42.276 }, 00:25:42.276 "method": "bdev_nvme_attach_controller" 00:25:42.276 } 00:25:42.276 EOF 00:25:42.276 )") 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
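Note: the create_subsystem trace above boils down to four SPDK RPCs. rpc_cmd is the harness wrapper around scripts/rpc.py, so a standalone sketch of the same sequence (assuming a default RPC socket; every argument is taken verbatim from the trace) would be:

    # null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420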
00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:42.276 "params": { 00:25:42.276 "name": "Nvme0", 00:25:42.276 "trtype": "tcp", 00:25:42.276 "traddr": "10.0.0.2", 00:25:42.276 "adrfam": "ipv4", 00:25:42.276 "trsvcid": "4420", 00:25:42.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:42.276 "hdgst": false, 00:25:42.276 "ddgst": false 00:25:42.276 }, 00:25:42.276 "method": "bdev_nvme_attach_controller" 00:25:42.276 }' 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:42.276 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:42.277 03:20:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.535 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:42.535 ... 
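Note: fio is handed two descriptors on the exec line above: /dev/fd/62 carries the bdev JSON printed just before it, and /dev/fd/61 carries the job file produced by gen_fio_conf. The job file itself is not echoed to the log, but from the parameters set for this test (bs=128k, numjobs=3, iodepth=3, runtime=5) and the filename0 banner just above, it is roughly the sketch below. The bdev name Nvme0n1 is an assumption derived from the attached controller being named Nvme0, and thread=1 reflects the usual requirement of the spdk_bdev ioengine:

    cat <<-EOF
    [global]
    thread=1
    ioengine=spdk_bdev
    time_based=1
    runtime=5
    bs=128k
    iodepth=3
    numjobs=3
    [filename0]
    filename=Nvme0n1
    rw=randread
    EOF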
00:25:42.535 fio-3.35 00:25:42.535 Starting 3 threads 00:25:42.535 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.209 00:25:49.209 filename0: (groupid=0, jobs=1): err= 0: pid=1191970: Wed May 15 03:20:19 2024 00:25:49.209 read: IOPS=323, BW=40.5MiB/s (42.5MB/s)(204MiB/5044msec) 00:25:49.209 slat (nsec): min=6519, max=40869, avg=16911.31, stdev=8270.76 00:25:49.209 clat (usec): min=3514, max=50096, avg=9214.33, stdev=9103.59 00:25:49.209 lat (usec): min=3522, max=50118, avg=9231.25, stdev=9103.94 00:25:49.209 clat percentiles (usec): 00:25:49.209 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 5538], 00:25:49.209 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7832], 00:25:49.209 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[44827], 00:25:49.209 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:25:49.209 | 99.99th=[50070] 00:25:49.209 bw ( KiB/s): min=20480, max=51200, per=38.44%, avg=41760.50, stdev=8879.73, samples=10 00:25:49.209 iops : min= 160, max= 400, avg=326.20, stdev=69.42, samples=10 00:25:49.209 lat (msec) : 4=0.73%, 10=87.15%, 20=7.04%, 50=5.02%, 100=0.06% 00:25:49.209 cpu : usr=96.53%, sys=3.11%, ctx=27, majf=0, minf=52 00:25:49.209 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 issued rwts: total=1634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:49.209 filename0: (groupid=0, jobs=1): err= 0: pid=1191971: Wed May 15 03:20:19 2024 00:25:49.209 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5003msec) 00:25:49.209 slat (nsec): min=6398, max=51466, avg=15884.46, stdev=9006.34 00:25:49.209 clat (usec): min=3698, max=52190, avg=11589.19, stdev=11909.57 00:25:49.209 lat (usec): min=3705, max=52203, avg=11605.07, stdev=11909.79 00:25:49.209 clat percentiles (usec): 00:25:49.209 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 6325], 00:25:49.209 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 8029], 60.00th=[ 8979], 00:25:49.209 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12256], 95.00th=[47973], 00:25:49.209 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:25:49.209 | 99.99th=[52167] 00:25:49.209 bw ( KiB/s): min=20992, max=48896, per=30.40%, avg=33024.00, stdev=7643.89, samples=10 00:25:49.209 iops : min= 164, max= 382, avg=258.00, stdev=59.72, samples=10 00:25:49.209 lat (msec) : 4=0.54%, 10=73.78%, 20=16.40%, 50=8.51%, 100=0.77% 00:25:49.209 cpu : usr=96.46%, sys=3.24%, ctx=11, majf=0, minf=99 00:25:49.209 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:49.209 filename0: (groupid=0, jobs=1): err= 0: pid=1191972: Wed May 15 03:20:19 2024 00:25:49.209 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(169MiB/5005msec) 00:25:49.209 slat (nsec): min=6335, max=51509, avg=15489.52, stdev=9143.58 00:25:49.209 clat (usec): min=3954, max=52314, avg=11069.17, stdev=11374.40 00:25:49.209 lat (usec): min=3961, max=52340, avg=11084.66, stdev=11375.25 00:25:49.209 clat 
percentiles (usec): 00:25:49.209 | 1.00th=[ 4146], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 6194], 00:25:49.209 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8848], 00:25:49.209 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11994], 95.00th=[47973], 00:25:49.209 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:25:49.209 | 99.99th=[52167] 00:25:49.209 bw ( KiB/s): min=22016, max=46941, per=31.84%, avg=34594.90, stdev=6934.52, samples=10 00:25:49.209 iops : min= 172, max= 366, avg=270.20, stdev=54.03, samples=10 00:25:49.209 lat (msec) : 4=0.15%, 10=75.55%, 20=16.10%, 50=7.31%, 100=0.89% 00:25:49.209 cpu : usr=96.42%, sys=3.26%, ctx=10, majf=0, minf=167 00:25:49.209 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:49.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.209 issued rwts: total=1354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:49.209 00:25:49.209 Run status group 0 (all jobs): 00:25:49.209 READ: bw=106MiB/s (111MB/s), 32.3MiB/s-40.5MiB/s (33.9MB/s-42.5MB/s), io=535MiB (561MB), run=5003-5044msec 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:49.209 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- 
# local sub_id=0 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 bdev_null0 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 [2024-05-15 03:20:19.422744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 bdev_null1 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 bdev_null2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:49.210 { 00:25:49.210 "params": { 00:25:49.210 "name": "Nvme$subsystem", 00:25:49.210 "trtype": "$TEST_TRANSPORT", 00:25:49.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.210 "adrfam": "ipv4", 00:25:49.210 "trsvcid": "$NVMF_PORT", 00:25:49.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.210 "hdgst": ${hdgst:-false}, 00:25:49.210 "ddgst": ${ddgst:-false} 00:25:49.210 }, 00:25:49.210 "method": "bdev_nvme_attach_controller" 00:25:49.210 } 00:25:49.210 EOF 00:25:49.210 )") 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:49.210 { 00:25:49.210 "params": { 00:25:49.210 "name": "Nvme$subsystem", 00:25:49.210 "trtype": "$TEST_TRANSPORT", 00:25:49.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.210 "adrfam": "ipv4", 00:25:49.210 "trsvcid": "$NVMF_PORT", 00:25:49.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.210 "hdgst": ${hdgst:-false}, 00:25:49.210 "ddgst": ${ddgst:-false} 00:25:49.210 }, 00:25:49.210 "method": "bdev_nvme_attach_controller" 00:25:49.210 } 00:25:49.210 EOF 00:25:49.210 )") 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:49.210 03:20:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:49.210 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:49.210 { 00:25:49.210 "params": { 00:25:49.210 "name": "Nvme$subsystem", 00:25:49.210 "trtype": "$TEST_TRANSPORT", 00:25:49.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.211 "adrfam": "ipv4", 00:25:49.211 "trsvcid": "$NVMF_PORT", 00:25:49.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.211 "hdgst": ${hdgst:-false}, 00:25:49.211 "ddgst": ${ddgst:-false} 00:25:49.211 }, 00:25:49.211 "method": "bdev_nvme_attach_controller" 00:25:49.211 } 00:25:49.211 EOF 00:25:49.211 )") 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:49.211 "params": { 00:25:49.211 "name": "Nvme0", 00:25:49.211 "trtype": "tcp", 00:25:49.211 "traddr": "10.0.0.2", 00:25:49.211 "adrfam": "ipv4", 00:25:49.211 "trsvcid": "4420", 00:25:49.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:49.211 "hdgst": false, 00:25:49.211 "ddgst": false 00:25:49.211 }, 00:25:49.211 "method": "bdev_nvme_attach_controller" 00:25:49.211 },{ 00:25:49.211 "params": { 00:25:49.211 "name": "Nvme1", 00:25:49.211 "trtype": "tcp", 00:25:49.211 "traddr": "10.0.0.2", 00:25:49.211 "adrfam": "ipv4", 00:25:49.211 "trsvcid": "4420", 00:25:49.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.211 "hdgst": false, 00:25:49.211 "ddgst": false 00:25:49.211 }, 00:25:49.211 "method": "bdev_nvme_attach_controller" 00:25:49.211 },{ 00:25:49.211 "params": { 00:25:49.211 "name": "Nvme2", 00:25:49.211 "trtype": "tcp", 00:25:49.211 "traddr": "10.0.0.2", 00:25:49.211 "adrfam": "ipv4", 00:25:49.211 "trsvcid": "4420", 00:25:49.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:49.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:49.211 "hdgst": false, 00:25:49.211 "ddgst": false 00:25:49.211 }, 00:25:49.211 "method": "bdev_nvme_attach_controller" 00:25:49.211 }' 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:25:49.211 
03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:49.211 03:20:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.211 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.211 ... 00:25:49.211 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.211 ... 00:25:49.211 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.211 ... 00:25:49.211 fio-3.35 00:25:49.211 Starting 24 threads 00:25:49.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.415 00:26:01.415 filename0: (groupid=0, jobs=1): err= 0: pid=1193184: Wed May 15 03:20:31 2024 00:26:01.415 read: IOPS=443, BW=1774KiB/s (1816kB/s)(17.3MiB/10009msec) 00:26:01.415 slat (nsec): min=7014, max=63941, avg=17739.91, stdev=7919.56 00:26:01.415 clat (msec): min=15, max=370, avg=35.95, stdev=43.07 00:26:01.415 lat (msec): min=15, max=370, avg=35.96, stdev=43.06 00:26:01.415 clat percentiles (msec): 00:26:01.415 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.415 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.415 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.415 | 99.00th=[ 268], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 326], 00:26:01.415 | 99.99th=[ 372] 00:26:01.415 bw ( KiB/s): min= 128, max= 2308, per=4.10%, avg=1747.53, stdev=932.60, samples=19 00:26:01.415 iops : min= 32, max= 577, avg=436.84, stdev=233.22, samples=19 00:26:01.415 lat (msec) : 20=0.23%, 50=96.17%, 250=0.99%, 500=2.61% 00:26:01.415 cpu : usr=98.80%, sys=0.82%, ctx=60, majf=0, minf=36 00:26:01.415 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.415 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.415 issued rwts: total=4438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.415 filename0: (groupid=0, jobs=1): err= 0: pid=1193185: Wed May 15 03:20:31 2024 00:26:01.415 read: IOPS=446, BW=1788KiB/s (1831kB/s)(17.5MiB/10023msec) 00:26:01.415 slat (nsec): min=4307, max=73272, avg=21588.15, stdev=17312.37 00:26:01.415 clat (msec): min=18, max=277, avg=35.61, stdev=39.43 00:26:01.415 lat (msec): min=18, max=277, avg=35.63, stdev=39.43 00:26:01.415 clat percentiles (msec): 00:26:01.415 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.415 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.415 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 30], 00:26:01.415 | 99.00th=[ 259], 99.50th=[ 262], 99.90th=[ 279], 99.95th=[ 279], 00:26:01.416 | 99.99th=[ 279] 00:26:01.416 bw ( KiB/s): min= 240, max= 2304, per=4.19%, avg=1785.35, stdev=881.04, samples=20 00:26:01.416 iops : min= 60, max= 576, avg=446.30, stdev=220.24, samples=20 00:26:01.416 lat (msec) : 20=0.36%, 50=95.40%, 100=0.36%, 250=2.72%, 500=1.16% 00:26:01.416 
cpu : usr=98.92%, sys=0.70%, ctx=16, majf=0, minf=110 00:26:01.416 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193187: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=441, BW=1766KiB/s (1808kB/s)(17.2MiB/10004msec) 00:26:01.416 slat (nsec): min=6867, max=50770, avg=22112.43, stdev=7065.99 00:26:01.416 clat (msec): min=5, max=667, avg=36.04, stdev=57.82 00:26:01.416 lat (msec): min=5, max=667, avg=36.06, stdev=57.82 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 376], 99.50th=[ 422], 99.90th=[ 667], 99.95th=[ 667], 00:26:01.416 | 99.99th=[ 667] 00:26:01.416 bw ( KiB/s): min= 128, max= 2304, per=4.27%, avg=1820.44, stdev=897.13, samples=18 00:26:01.416 iops : min= 32, max= 576, avg=455.11, stdev=224.28, samples=18 00:26:01.416 lat (msec) : 10=0.36%, 20=0.36%, 50=96.74%, 250=0.72%, 500=1.45% 00:26:01.416 lat (msec) : 750=0.36% 00:26:01.416 cpu : usr=98.91%, sys=0.71%, ctx=7, majf=0, minf=39 00:26:01.416 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193188: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.2MiB/10012msec) 00:26:01.416 slat (nsec): min=7044, max=86568, avg=42906.51, stdev=21438.65 00:26:01.416 clat (msec): min=21, max=511, avg=35.94, stdev=48.86 00:26:01.416 lat (msec): min=22, max=511, avg=35.98, stdev=48.86 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 271], 99.50th=[ 359], 99.90th=[ 514], 99.95th=[ 514], 00:26:01.416 | 99.99th=[ 514] 00:26:01.416 bw ( KiB/s): min= 240, max= 2304, per=4.35%, avg=1852.63, stdev=851.34, samples=19 00:26:01.416 iops : min= 60, max= 576, avg=463.16, stdev=212.84, samples=19 00:26:01.416 lat (msec) : 50=96.78%, 250=1.00%, 500=1.86%, 750=0.36% 00:26:01.416 cpu : usr=98.92%, sys=0.70%, ctx=13, majf=0, minf=45 00:26:01.416 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193189: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=444, BW=1778KiB/s 
(1821kB/s)(17.4MiB/10007msec) 00:26:01.416 slat (nsec): min=5274, max=77187, avg=39720.93, stdev=15310.64 00:26:01.416 clat (msec): min=6, max=488, avg=35.63, stdev=48.13 00:26:01.416 lat (msec): min=7, max=488, avg=35.67, stdev=48.12 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 271], 99.50th=[ 384], 99.90th=[ 489], 99.95th=[ 489], 00:26:01.416 | 99.99th=[ 489] 00:26:01.416 bw ( KiB/s): min= 128, max= 2304, per=4.08%, avg=1738.11, stdev=942.86, samples=19 00:26:01.416 iops : min= 32, max= 576, avg=434.53, stdev=235.71, samples=19 00:26:01.416 lat (msec) : 10=0.36%, 20=0.36%, 50=96.09%, 250=1.35%, 500=1.84% 00:26:01.416 cpu : usr=99.18%, sys=0.49%, ctx=8, majf=0, minf=44 00:26:01.416 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193190: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=443, BW=1775KiB/s (1818kB/s)(17.4MiB/10014msec) 00:26:01.416 slat (nsec): min=7022, max=63932, avg=22630.89, stdev=6673.53 00:26:01.416 clat (msec): min=15, max=537, avg=35.85, stdev=46.17 00:26:01.416 lat (msec): min=15, max=537, avg=35.87, stdev=46.17 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 266], 99.50th=[ 326], 99.90th=[ 456], 99.95th=[ 456], 00:26:01.416 | 99.99th=[ 542] 00:26:01.416 bw ( KiB/s): min= 128, max= 2304, per=4.16%, avg=1771.20, stdev=923.14, samples=20 00:26:01.416 iops : min= 32, max= 576, avg=442.80, stdev=230.78, samples=20 00:26:01.416 lat (msec) : 20=0.36%, 50=96.26%, 250=1.67%, 500=1.67%, 750=0.05% 00:26:01.416 cpu : usr=98.86%, sys=0.76%, ctx=14, majf=0, minf=47 00:26:01.416 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193191: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=443, BW=1772KiB/s (1815kB/s)(17.3MiB/10002msec) 00:26:01.416 slat (nsec): min=5645, max=88629, avg=48329.76, stdev=19279.16 00:26:01.416 clat (msec): min=11, max=521, avg=35.68, stdev=48.61 00:26:01.416 lat (msec): min=11, max=521, avg=35.73, stdev=48.60 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 271], 99.50th=[ 393], 99.90th=[ 489], 99.95th=[ 489], 00:26:01.416 | 99.99th=[ 523] 00:26:01.416 bw ( KiB/s): min= 
128, max= 2304, per=4.08%, avg=1738.11, stdev=944.67, samples=19 00:26:01.416 iops : min= 32, max= 576, avg=434.53, stdev=236.17, samples=19 00:26:01.416 lat (msec) : 20=0.41%, 50=96.39%, 250=1.31%, 500=1.85%, 750=0.05% 00:26:01.416 cpu : usr=98.89%, sys=0.74%, ctx=15, majf=0, minf=37 00:26:01.416 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename0: (groupid=0, jobs=1): err= 0: pid=1193192: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=443, BW=1775KiB/s (1818kB/s)(17.4MiB/10014msec) 00:26:01.416 slat (nsec): min=7000, max=59029, avg=22863.51, stdev=6861.21 00:26:01.416 clat (msec): min=16, max=385, avg=35.85, stdev=43.60 00:26:01.416 lat (msec): min=16, max=385, avg=35.88, stdev=43.59 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.416 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.416 | 99.00th=[ 266], 99.50th=[ 305], 99.90th=[ 380], 99.95th=[ 384], 00:26:01.416 | 99.99th=[ 384] 00:26:01.416 bw ( KiB/s): min= 128, max= 2304, per=4.16%, avg=1771.20, stdev=917.65, samples=20 00:26:01.416 iops : min= 32, max= 576, avg=442.80, stdev=229.41, samples=20 00:26:01.416 lat (msec) : 20=0.23%, 50=96.26%, 250=1.35%, 500=2.16% 00:26:01.416 cpu : usr=98.90%, sys=0.71%, ctx=13, majf=0, minf=41 00:26:01.416 IO depths : 1=6.1%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.416 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.416 filename1: (groupid=0, jobs=1): err= 0: pid=1193193: Wed May 15 03:20:31 2024 00:26:01.416 read: IOPS=446, BW=1787KiB/s (1830kB/s)(17.5MiB/10026msec) 00:26:01.416 slat (nsec): min=6854, max=58453, avg=24415.60, stdev=9888.09 00:26:01.416 clat (msec): min=3, max=360, avg=35.58, stdev=41.49 00:26:01.416 lat (msec): min=3, max=360, avg=35.61, stdev=41.49 00:26:01.416 clat percentiles (msec): 00:26:01.416 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.416 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 30], 00:26:01.417 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 309], 00:26:01.417 | 99.99th=[ 363] 00:26:01.417 bw ( KiB/s): min= 256, max= 2304, per=4.19%, avg=1785.10, stdev=879.23, samples=20 00:26:01.417 iops : min= 64, max= 576, avg=446.20, stdev=219.77, samples=20 00:26:01.417 lat (msec) : 4=0.16%, 10=0.20%, 20=0.04%, 50=95.71%, 100=0.31% 00:26:01.417 lat (msec) : 250=1.12%, 500=2.46% 00:26:01.417 cpu : usr=98.07%, sys=1.00%, ctx=110, majf=0, minf=52 00:26:01.417 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4480,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193194: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=456, BW=1826KiB/s (1870kB/s)(17.9MiB/10014msec) 00:26:01.417 slat (nsec): min=6968, max=60784, avg=20815.88, stdev=8529.62 00:26:01.417 clat (msec): min=11, max=336, avg=34.89, stdev=42.14 00:26:01.417 lat (msec): min=11, max=336, avg=34.91, stdev=42.14 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 268], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:26:01.417 | 99.99th=[ 338] 00:26:01.417 bw ( KiB/s): min= 128, max= 3056, per=4.28%, avg=1823.20, stdev=959.26, samples=20 00:26:01.417 iops : min= 32, max= 764, avg=455.80, stdev=239.82, samples=20 00:26:01.417 lat (msec) : 20=5.82%, 50=90.68%, 250=0.74%, 500=2.76% 00:26:01.417 cpu : usr=98.94%, sys=0.67%, ctx=10, majf=0, minf=58 00:26:01.417 IO depths : 1=0.2%, 2=5.8%, 4=23.1%, 8=58.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193196: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=441, BW=1766KiB/s (1808kB/s)(17.2MiB/10003msec) 00:26:01.417 slat (nsec): min=7021, max=91748, avg=48628.88, stdev=19121.83 00:26:01.417 clat (msec): min=21, max=503, avg=35.83, stdev=48.64 00:26:01.417 lat (msec): min=21, max=503, avg=35.87, stdev=48.64 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 271], 99.50th=[ 393], 99.90th=[ 506], 99.95th=[ 506], 00:26:01.417 | 99.99th=[ 506] 00:26:01.417 bw ( KiB/s): min= 256, max= 2304, per=4.31%, avg=1834.67, stdev=869.13, samples=18 00:26:01.417 iops : min= 64, max= 576, avg=458.67, stdev=217.28, samples=18 00:26:01.417 lat (msec) : 50=96.74%, 250=1.45%, 500=1.45%, 750=0.36% 00:26:01.417 cpu : usr=98.82%, sys=0.79%, ctx=25, majf=0, minf=35 00:26:01.417 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193197: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=443, BW=1775KiB/s (1818kB/s)(17.4MiB/10014msec) 00:26:01.417 slat (nsec): min=6996, max=62772, avg=23340.54, stdev=7376.44 00:26:01.417 clat (msec): min=15, max=414, avg=35.84, stdev=45.45 00:26:01.417 lat (msec): min=15, max=414, avg=35.86, stdev=45.45 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 
00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 279], 99.50th=[ 338], 99.90th=[ 414], 99.95th=[ 414], 00:26:01.417 | 99.99th=[ 414] 00:26:01.417 bw ( KiB/s): min= 128, max= 2304, per=4.16%, avg=1771.20, stdev=923.14, samples=20 00:26:01.417 iops : min= 32, max= 576, avg=442.80, stdev=230.78, samples=20 00:26:01.417 lat (msec) : 20=0.23%, 50=96.40%, 250=1.26%, 500=2.12% 00:26:01.417 cpu : usr=98.88%, sys=0.74%, ctx=10, majf=0, minf=55 00:26:01.417 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193198: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=443, BW=1772KiB/s (1815kB/s)(17.3MiB/10002msec) 00:26:01.417 slat (nsec): min=5214, max=86645, avg=47918.33, stdev=19911.71 00:26:01.417 clat (msec): min=11, max=490, avg=35.66, stdev=48.05 00:26:01.417 lat (msec): min=11, max=490, avg=35.71, stdev=48.04 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 271], 99.50th=[ 384], 99.90th=[ 489], 99.95th=[ 489], 00:26:01.417 | 99.99th=[ 489] 00:26:01.417 bw ( KiB/s): min= 128, max= 2304, per=4.08%, avg=1738.11, stdev=942.84, samples=19 00:26:01.417 iops : min= 32, max= 576, avg=434.53, stdev=235.71, samples=19 00:26:01.417 lat (msec) : 20=0.36%, 50=96.39%, 250=1.44%, 500=1.81% 00:26:01.417 cpu : usr=98.90%, sys=0.72%, ctx=17, majf=0, minf=41 00:26:01.417 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193199: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=443, BW=1775KiB/s (1818kB/s)(17.4MiB/10014msec) 00:26:01.417 slat (nsec): min=6969, max=63904, avg=22683.93, stdev=7035.09 00:26:01.417 clat (msec): min=15, max=415, avg=35.84, stdev=45.48 00:26:01.417 lat (msec): min=15, max=415, avg=35.87, stdev=45.48 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 279], 99.50th=[ 334], 99.90th=[ 414], 99.95th=[ 414], 00:26:01.417 | 99.99th=[ 414] 00:26:01.417 bw ( KiB/s): min= 128, max= 2304, per=4.16%, avg=1771.20, stdev=923.14, samples=20 00:26:01.417 iops : min= 32, max= 576, avg=442.80, stdev=230.78, samples=20 00:26:01.417 lat (msec) : 20=0.23%, 50=96.40%, 250=1.26%, 500=2.12% 00:26:01.417 cpu : usr=98.91%, sys=0.71%, ctx=10, majf=0, minf=35 00:26:01.417 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193200: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10006msec) 00:26:01.417 slat (nsec): min=5845, max=85490, avg=39334.30, stdev=23772.24 00:26:01.417 clat (msec): min=6, max=487, avg=35.24, stdev=48.93 00:26:01.417 lat (msec): min=6, max=487, avg=35.28, stdev=48.92 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 32], 00:26:01.417 | 99.00th=[ 271], 99.50th=[ 384], 99.90th=[ 489], 99.95th=[ 489], 00:26:01.417 | 99.99th=[ 489] 00:26:01.417 bw ( KiB/s): min= 128, max= 2416, per=4.13%, avg=1758.32, stdev=965.55, samples=19 00:26:01.417 iops : min= 32, max= 604, avg=439.58, stdev=241.39, samples=19 00:26:01.417 lat (msec) : 10=0.36%, 20=1.73%, 50=94.93%, 250=0.89%, 500=2.09% 00:26:01.417 cpu : usr=98.86%, sys=0.75%, ctx=15, majf=0, minf=56 00:26:01.417 IO depths : 1=4.8%, 2=9.7%, 4=20.1%, 8=57.0%, 16=8.4%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename1: (groupid=0, jobs=1): err= 0: pid=1193201: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=443, BW=1773KiB/s (1815kB/s)(17.3MiB/10014msec) 00:26:01.417 slat (nsec): min=7052, max=65036, avg=23667.87, stdev=7197.61 00:26:01.417 clat (msec): min=16, max=437, avg=35.89, stdev=45.20 00:26:01.417 lat (msec): min=16, max=437, avg=35.92, stdev=45.19 00:26:01.417 clat percentiles (msec): 00:26:01.417 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.417 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.417 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.417 | 99.00th=[ 271], 99.50th=[ 355], 99.90th=[ 401], 99.95th=[ 401], 00:26:01.417 | 99.99th=[ 439] 00:26:01.417 bw ( KiB/s): min= 128, max= 2304, per=4.15%, avg=1768.80, stdev=921.90, samples=20 00:26:01.417 iops : min= 32, max= 576, avg=442.20, stdev=230.48, samples=20 00:26:01.417 lat (msec) : 20=0.36%, 50=96.26%, 250=1.08%, 500=2.30% 00:26:01.417 cpu : usr=98.81%, sys=0.81%, ctx=14, majf=0, minf=48 00:26:01.417 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.417 issued rwts: total=4438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.417 filename2: (groupid=0, jobs=1): err= 0: pid=1193202: Wed May 15 03:20:31 2024 00:26:01.417 read: IOPS=442, BW=1769KiB/s (1812kB/s)(17.3MiB/10011msec) 00:26:01.417 slat (nsec): min=4323, max=57661, avg=21745.88, stdev=7462.16 00:26:01.418 clat (msec): min=16, max=372, avg=36.00, stdev=43.86 00:26:01.418 lat (msec): min=16, max=372, avg=36.02, stdev=43.86 00:26:01.418 clat percentiles (msec): 
00:26:01.418 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 359], 99.95th=[ 372], 00:26:01.418 | 99.99th=[ 372] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.09%, avg=1742.89, stdev=933.78, samples=19 00:26:01.418 iops : min= 32, max= 576, avg=435.68, stdev=233.42, samples=19 00:26:01.418 lat (msec) : 20=0.23%, 50=96.25%, 250=1.13%, 500=2.39% 00:26:01.418 cpu : usr=98.85%, sys=0.77%, ctx=17, majf=0, minf=43 00:26:01.418 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193203: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=442, BW=1771KiB/s (1813kB/s)(17.3MiB/10012msec) 00:26:01.418 slat (usec): min=7, max=261, avg=36.97, stdev=15.31 00:26:01.418 clat (msec): min=22, max=356, avg=35.85, stdev=43.59 00:26:01.418 lat (msec): min=22, max=356, avg=35.88, stdev=43.58 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 30], 00:26:01.418 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 355], 99.95th=[ 355], 00:26:01.418 | 99.99th=[ 355] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.15%, avg=1766.40, stdev=912.31, samples=20 00:26:01.418 iops : min= 32, max= 576, avg=441.60, stdev=228.08, samples=20 00:26:01.418 lat (msec) : 50=96.39%, 250=1.44%, 500=2.17% 00:26:01.418 cpu : usr=98.01%, sys=1.22%, ctx=53, majf=0, minf=35 00:26:01.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193204: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=441, BW=1768KiB/s (1810kB/s)(17.3MiB/10005msec) 00:26:01.418 slat (nsec): min=6599, max=77512, avg=14826.82, stdev=13464.81 00:26:01.418 clat (msec): min=5, max=666, avg=36.13, stdev=57.04 00:26:01.418 lat (msec): min=5, max=666, avg=36.14, stdev=57.04 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 667], 99.95th=[ 667], 00:26:01.418 | 99.99th=[ 667] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.28%, avg=1823.11, stdev=897.97, samples=18 00:26:01.418 iops : min= 32, max= 576, avg=455.78, stdev=224.49, samples=18 00:26:01.418 lat (msec) : 10=0.50%, 20=0.45%, 50=96.52%, 250=1.00%, 500=1.18% 00:26:01.418 lat (msec) : 750=0.36% 00:26:01.418 cpu : usr=99.06%, sys=0.55%, ctx=18, majf=0, 
minf=44 00:26:01.418 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=79.1%, 16=18.0%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=89.8%, 8=9.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193206: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.3MiB/10005msec) 00:26:01.418 slat (nsec): min=7055, max=86701, avg=48304.52, stdev=19418.44 00:26:01.418 clat (msec): min=11, max=493, avg=35.67, stdev=48.36 00:26:01.418 lat (msec): min=11, max=493, avg=35.72, stdev=48.35 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 271], 99.50th=[ 393], 99.90th=[ 493], 99.95th=[ 493], 00:26:01.418 | 99.99th=[ 493] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.08%, avg=1738.11, stdev=944.67, samples=19 00:26:01.418 iops : min= 32, max= 576, avg=434.53, stdev=236.17, samples=19 00:26:01.418 lat (msec) : 20=0.36%, 50=96.44%, 250=1.40%, 500=1.81% 00:26:01.418 cpu : usr=98.83%, sys=0.77%, ctx=23, majf=0, minf=45 00:26:01.418 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193207: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=444, BW=1780KiB/s (1822kB/s)(17.4MiB/10024msec) 00:26:01.418 slat (nsec): min=5362, max=79010, avg=31592.95, stdev=17933.72 00:26:01.418 clat (msec): min=19, max=453, avg=35.64, stdev=44.92 00:26:01.418 lat (msec): min=19, max=453, avg=35.67, stdev=44.91 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 266], 99.50th=[ 342], 99.90th=[ 456], 99.95th=[ 456], 00:26:01.418 | 99.99th=[ 456] 00:26:01.418 bw ( KiB/s): min= 192, max= 2304, per=4.17%, avg=1777.35, stdev=897.60, samples=20 00:26:01.418 iops : min= 48, max= 576, avg=444.30, stdev=224.38, samples=20 00:26:01.418 lat (msec) : 20=0.36%, 50=95.92%, 100=0.36%, 250=1.93%, 500=1.43% 00:26:01.418 cpu : usr=98.83%, sys=0.79%, ctx=9, majf=0, minf=32 00:26:01.418 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193208: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=439, BW=1759KiB/s (1802kB/s)(17.2MiB/10004msec) 00:26:01.418 slat (nsec): min=6348, max=79071, avg=29935.17, 
stdev=18245.99 00:26:01.418 clat (msec): min=7, max=667, avg=36.05, stdev=59.50 00:26:01.418 lat (msec): min=7, max=667, avg=36.08, stdev=59.50 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 667], 99.95th=[ 667], 00:26:01.418 | 99.99th=[ 667] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.27%, avg=1820.44, stdev=914.16, samples=18 00:26:01.418 iops : min= 32, max= 576, avg=455.11, stdev=228.54, samples=18 00:26:01.418 lat (msec) : 10=0.36%, 20=0.16%, 50=97.30%, 250=0.05%, 500=1.77% 00:26:01.418 lat (msec) : 750=0.36% 00:26:01.418 cpu : usr=98.85%, sys=0.76%, ctx=15, majf=0, minf=47 00:26:01.418 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193209: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=443, BW=1773KiB/s (1815kB/s)(17.3MiB/10001msec) 00:26:01.418 slat (nsec): min=5746, max=86511, avg=45461.59, stdev=20028.81 00:26:01.418 clat (msec): min=11, max=489, avg=35.67, stdev=48.02 00:26:01.418 lat (msec): min=11, max=489, avg=35.71, stdev=48.01 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 271], 99.50th=[ 384], 99.90th=[ 489], 99.95th=[ 489], 00:26:01.418 | 99.99th=[ 489] 00:26:01.418 bw ( KiB/s): min= 128, max= 2304, per=4.08%, avg=1738.11, stdev=942.84, samples=19 00:26:01.418 iops : min= 32, max= 576, avg=434.53, stdev=235.71, samples=19 00:26:01.418 lat (msec) : 20=0.36%, 50=96.39%, 250=1.44%, 500=1.81% 00:26:01.418 cpu : usr=98.75%, sys=0.81%, ctx=18, majf=0, minf=38 00:26:01.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.418 filename2: (groupid=0, jobs=1): err= 0: pid=1193210: Wed May 15 03:20:31 2024 00:26:01.418 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.5MiB/10022msec) 00:26:01.418 slat (nsec): min=4125, max=79071, avg=31161.65, stdev=17735.39 00:26:01.418 clat (msec): min=17, max=378, avg=35.38, stdev=42.80 00:26:01.418 lat (msec): min=17, max=378, avg=35.41, stdev=42.80 00:26:01.418 clat percentiles (msec): 00:26:01.418 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:26:01.418 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:26:01.418 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:26:01.418 | 99.00th=[ 268], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 321], 00:26:01.418 | 99.99th=[ 380] 00:26:01.418 bw ( KiB/s): min= 127, max= 2528, per=4.20%, avg=1790.10, stdev=926.26, 
samples=20 00:26:01.418 iops : min= 31, max= 632, avg=447.45, stdev=231.61, samples=20 00:26:01.418 lat (msec) : 20=1.63%, 50=94.81%, 250=1.11%, 500=2.45% 00:26:01.418 cpu : usr=98.84%, sys=0.79%, ctx=18, majf=0, minf=36 00:26:01.418 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:01.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.418 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.419 issued rwts: total=4492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:01.419 00:26:01.419 Run status group 0 (all jobs): 00:26:01.419 READ: bw=41.6MiB/s (43.6MB/s), 1759KiB/s-1826KiB/s (1802kB/s-1870kB/s), io=417MiB (437MB), run=10001-10026msec 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:01.419 03:20:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 bdev_null0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.419 [2024-05-15 03:20:31.313071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 bdev_null1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:26:01.419 { 00:26:01.419 "params": { 00:26:01.419 "name": "Nvme$subsystem", 00:26:01.419 "trtype": "$TEST_TRANSPORT", 00:26:01.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.419 "adrfam": "ipv4", 00:26:01.419 "trsvcid": "$NVMF_PORT", 00:26:01.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.419 "hdgst": ${hdgst:-false}, 00:26:01.419 "ddgst": ${ddgst:-false} 00:26:01.419 }, 00:26:01.419 "method": "bdev_nvme_attach_controller" 00:26:01.419 } 00:26:01.419 EOF 00:26:01.419 )") 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:01.419 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:01.419 { 00:26:01.420 "params": { 00:26:01.420 "name": "Nvme$subsystem", 00:26:01.420 "trtype": "$TEST_TRANSPORT", 00:26:01.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.420 "adrfam": "ipv4", 00:26:01.420 "trsvcid": "$NVMF_PORT", 00:26:01.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.420 "hdgst": ${hdgst:-false}, 00:26:01.420 "ddgst": ${ddgst:-false} 00:26:01.420 }, 00:26:01.420 "method": "bdev_nvme_attach_controller" 00:26:01.420 } 00:26:01.420 EOF 00:26:01.420 )") 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
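The heredoc/array/jq sequence traced above is the whole of the JSON assembly: each subsystem contributes one bdev_nvme_attach_controller object through a here-document, the objects land in a bash array, get joined with IFS=',', and the result is run through jq both to validate it and to pretty-print it for fio. A minimal sketch of the same pattern, under the assumption that the outer "subsystems"/"bdev" envelope (not echoed in this log) follows SPDK's standard JSON config shape:

gen_target_json() {  # simplified illustration, not the exact nvmf/common.sh helper
  local sub config=()
  for sub in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the per-subsystem objects with commas, wrap them in the standard
  # SPDK "subsystems" envelope, and let jq validate and pretty-print.
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
EOF
}
gen_target_json 0 1 > /tmp/bdev.json   # two controllers, as in the run above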
00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:01.420 "params": { 00:26:01.420 "name": "Nvme0", 00:26:01.420 "trtype": "tcp", 00:26:01.420 "traddr": "10.0.0.2", 00:26:01.420 "adrfam": "ipv4", 00:26:01.420 "trsvcid": "4420", 00:26:01.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.420 "hdgst": false, 00:26:01.420 "ddgst": false 00:26:01.420 }, 00:26:01.420 "method": "bdev_nvme_attach_controller" 00:26:01.420 },{ 00:26:01.420 "params": { 00:26:01.420 "name": "Nvme1", 00:26:01.420 "trtype": "tcp", 00:26:01.420 "traddr": "10.0.0.2", 00:26:01.420 "adrfam": "ipv4", 00:26:01.420 "trsvcid": "4420", 00:26:01.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:01.420 "hdgst": false, 00:26:01.420 "ddgst": false 00:26:01.420 }, 00:26:01.420 "method": "bdev_nvme_attach_controller" 00:26:01.420 }' 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:01.420 03:20:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.420 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:01.420 ... 00:26:01.420 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:01.420 ... 
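Worth noting before the results scroll past: fio never opens a kernel block device in this test. The spdk_bdev external ioengine is a shared object built by SPDK and injected via LD_PRELOAD, the JSON printed above reaches it on /dev/fd/62, and the generated job file arrives on /dev/fd/61, both wired up by process substitution in the harness. Driving the plugin by hand would look roughly like the sketch below ($SPDK_ROOT and the Nvme0n1 bdev name are illustrative, not taken from this log; the plugin does require thread=1 jobs):

fio_plugin=$SPDK_ROOT/build/fio/spdk_bdev
LD_PRELOAD=$fio_plugin fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
  --thread=1 --name=job0 --filename=Nvme0n1 \
  --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
  --time_based=1 --runtime=5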
00:26:01.420 fio-3.35 00:26:01.420 Starting 4 threads 00:26:01.420 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.686 00:26:06.686 filename0: (groupid=0, jobs=1): err= 0: pid=1195202: Wed May 15 03:20:37 2024 00:26:06.686 read: IOPS=2857, BW=22.3MiB/s (23.4MB/s)(112MiB/5003msec) 00:26:06.686 slat (nsec): min=2935, max=61729, avg=12137.24, stdev=7815.73 00:26:06.686 clat (usec): min=709, max=7915, avg=2764.05, stdev=465.38 00:26:06.686 lat (usec): min=721, max=7925, avg=2776.19, stdev=465.60 00:26:06.686 clat percentiles (usec): 00:26:06.686 | 1.00th=[ 1762], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2376], 00:26:06.686 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2868], 00:26:06.686 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3490], 00:26:06.686 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 4948], 99.95th=[ 7767], 00:26:06.686 | 99.99th=[ 7898] 00:26:06.686 bw ( KiB/s): min=21456, max=24624, per=27.46%, avg=22864.00, stdev=1178.49, samples=10 00:26:06.686 iops : min= 2682, max= 3078, avg=2858.00, stdev=147.31, samples=10 00:26:06.686 lat (usec) : 750=0.01% 00:26:06.686 lat (msec) : 2=3.04%, 4=95.52%, 10=1.43% 00:26:06.686 cpu : usr=96.94%, sys=2.72%, ctx=10, majf=0, minf=0 00:26:06.686 IO depths : 1=0.3%, 2=6.8%, 4=63.1%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.686 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.686 issued rwts: total=14295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.686 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.686 filename0: (groupid=0, jobs=1): err= 0: pid=1195203: Wed May 15 03:20:37 2024 00:26:06.686 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:26:06.686 slat (nsec): min=4316, max=59899, avg=11870.08, stdev=7693.06 00:26:06.686 clat (usec): min=730, max=6035, avg=3085.38, stdev=581.58 00:26:06.686 lat (usec): min=760, max=6048, avg=3097.25, stdev=581.14 00:26:06.686 clat percentiles (usec): 00:26:06.687 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:26:06.687 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3064], 00:26:06.687 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3884], 95.00th=[ 4359], 00:26:06.687 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5538], 99.95th=[ 5866], 00:26:06.687 | 99.99th=[ 5997] 00:26:06.687 bw ( KiB/s): min=19568, max=21168, per=24.58%, avg=20471.11, stdev=542.30, samples=9 00:26:06.687 iops : min= 2446, max= 2646, avg=2558.89, stdev=67.79, samples=9 00:26:06.687 lat (usec) : 750=0.01% 00:26:06.687 lat (msec) : 2=0.73%, 4=90.48%, 10=8.79% 00:26:06.687 cpu : usr=97.14%, sys=2.54%, ctx=11, majf=0, minf=0 00:26:06.687 IO depths : 1=0.4%, 2=3.3%, 4=68.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 issued rwts: total=12821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.687 filename1: (groupid=0, jobs=1): err= 0: pid=1195204: Wed May 15 03:20:37 2024 00:26:06.687 read: IOPS=2496, BW=19.5MiB/s (20.5MB/s)(97.6MiB/5002msec) 00:26:06.687 slat (nsec): min=6219, max=61624, avg=11695.51, stdev=7614.24 00:26:06.687 clat (usec): min=1201, max=45686, avg=3170.37, stdev=1219.76 00:26:06.687 lat (usec): min=1228, max=45711, avg=3182.07, stdev=1219.55 00:26:06.687 
clat percentiles (usec): 00:26:06.687 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:26:06.687 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:26:06.687 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3916], 95.00th=[ 4424], 00:26:06.687 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[45876], 00:26:06.687 | 99.99th=[45876] 00:26:06.687 bw ( KiB/s): min=18464, max=20816, per=23.91%, avg=19911.11, stdev=802.72, samples=9 00:26:06.687 iops : min= 2308, max= 2602, avg=2488.89, stdev=100.34, samples=9 00:26:06.687 lat (msec) : 2=0.62%, 4=90.18%, 10=9.13%, 50=0.06% 00:26:06.687 cpu : usr=97.08%, sys=2.58%, ctx=13, majf=0, minf=9 00:26:06.687 IO depths : 1=0.4%, 2=2.8%, 4=69.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 issued rwts: total=12487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.687 filename1: (groupid=0, jobs=1): err= 0: pid=1195205: Wed May 15 03:20:37 2024 00:26:06.687 read: IOPS=2494, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5002msec) 00:26:06.687 slat (usec): min=6, max=317, avg=13.42, stdev= 8.58 00:26:06.687 clat (usec): min=909, max=5623, avg=3168.11, stdev=543.13 00:26:06.687 lat (usec): min=919, max=5630, avg=3181.53, stdev=542.28 00:26:06.687 clat percentiles (usec): 00:26:06.687 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2802], 00:26:06.687 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3130], 00:26:06.687 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3884], 95.00th=[ 4424], 00:26:06.687 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5276], 99.95th=[ 5342], 00:26:06.687 | 99.99th=[ 5604] 00:26:06.687 bw ( KiB/s): min=19184, max=20960, per=23.96%, avg=19950.40, stdev=444.97, samples=10 00:26:06.687 iops : min= 2398, max= 2620, avg=2493.80, stdev=55.62, samples=10 00:26:06.687 lat (usec) : 1000=0.02% 00:26:06.687 lat (msec) : 2=0.46%, 4=90.59%, 10=8.92% 00:26:06.687 cpu : usr=89.76%, sys=5.56%, ctx=169, majf=0, minf=9 00:26:06.687 IO depths : 1=0.1%, 2=1.8%, 4=71.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.687 issued rwts: total=12477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.687 00:26:06.687 Run status group 0 (all jobs): 00:26:06.687 READ: bw=81.3MiB/s (85.3MB/s), 19.5MiB/s-22.3MiB/s (20.4MB/s-23.4MB/s), io=407MiB (427MB), run=5001-5003msec 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 00:26:06.687 real 0m24.370s 00:26:06.687 user 4m52.039s 00:26:06.687 sys 0m4.014s 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 ************************************ 00:26:06.687 END TEST fio_dif_rand_params 00:26:06.687 ************************************ 00:26:06.687 03:20:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:06.687 03:20:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:06.687 03:20:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 ************************************ 00:26:06.687 START TEST fio_dif_digest 00:26:06.687 ************************************ 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 bdev_null0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:06.687 [2024-05-15 03:20:37.747003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.687 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.687 { 00:26:06.687 "params": { 00:26:06.687 "name": "Nvme$subsystem", 00:26:06.687 "trtype": "$TEST_TRANSPORT", 00:26:06.687 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:06.688 "adrfam": "ipv4", 00:26:06.688 "trsvcid": "$NVMF_PORT", 00:26:06.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.688 "hdgst": ${hdgst:-false}, 00:26:06.688 "ddgst": ${ddgst:-false} 00:26:06.688 }, 00:26:06.688 "method": "bdev_nvme_attach_controller" 00:26:06.688 } 00:26:06.688 EOF 00:26:06.688 )") 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:06.688 "params": { 00:26:06.688 "name": "Nvme0", 00:26:06.688 "trtype": "tcp", 00:26:06.688 "traddr": "10.0.0.2", 00:26:06.688 "adrfam": "ipv4", 00:26:06.688 "trsvcid": "4420", 00:26:06.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.688 "hdgst": true, 00:26:06.688 "ddgst": true 00:26:06.688 }, 00:26:06.688 "method": "bdev_nvme_attach_controller" 00:26:06.688 }' 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:06.688 03:20:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.946 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:06.946 ... 
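The difference from the earlier attach is in the parameters just printed: "hdgst": true and "ddgst": true switch on the NVMe/TCP header and data digests, so every PDU in this stage carries a CRC32C check and the run exercises the digest path end to end rather than raw throughput. The matching job file is handed to fio on /dev/fd/61 and never echoed; below is a hedged reconstruction from the NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=10 settings traced earlier (the section names and the Nvme0n1 filename are assumptions, not log output):

cat <<'EOF' > /tmp/digest.fio   # hypothetical re-creation of gen_fio_conf output
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

Note that the digests live on the attached controller, not in the job file: fio sees an ordinary bdev, and the hdgst/ddgst negotiation happens underneath it in the NVMe/TCP transport.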
00:26:06.946 fio-3.35 00:26:06.946 Starting 3 threads 00:26:06.946 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.153 00:26:19.153 filename0: (groupid=0, jobs=1): err= 0: pid=1196270: Wed May 15 03:20:48 2024 00:26:19.153 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10048msec) 00:26:19.153 slat (nsec): min=6623, max=28922, avg=11726.23, stdev=2094.05 00:26:19.153 clat (usec): min=8001, max=51944, avg=10678.76, stdev=1311.58 00:26:19.153 lat (usec): min=8014, max=51956, avg=10690.49, stdev=1311.46 00:26:19.153 clat percentiles (usec): 00:26:19.153 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:26:19.153 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:26:19.153 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:26:19.153 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13304], 99.95th=[49021], 00:26:19.153 | 99.99th=[52167] 00:26:19.153 bw ( KiB/s): min=34885, max=37120, per=34.28%, avg=36014.90, stdev=566.33, samples=20 00:26:19.153 iops : min= 272, max= 290, avg=281.20, stdev= 4.51, samples=20 00:26:19.153 lat (msec) : 10=19.54%, 20=80.39%, 50=0.04%, 100=0.04% 00:26:19.153 cpu : usr=94.21%, sys=5.48%, ctx=22, majf=0, minf=148 00:26:19.153 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.153 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.153 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.153 filename0: (groupid=0, jobs=1): err= 0: pid=1196271: Wed May 15 03:20:48 2024 00:26:19.153 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(337MiB/10046msec) 00:26:19.153 slat (nsec): min=6612, max=27818, avg=11818.05, stdev=2150.51 00:26:19.153 clat (usec): min=8442, max=46892, avg=11161.11, stdev=1240.85 00:26:19.153 lat (usec): min=8455, max=46903, avg=11172.93, stdev=1240.82 00:26:19.153 clat percentiles (usec): 00:26:19.153 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:26:19.153 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:26:19.153 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12518], 00:26:19.153 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14484], 99.95th=[45876], 00:26:19.153 | 99.99th=[46924] 00:26:19.153 bw ( KiB/s): min=33346, max=35328, per=32.79%, avg=34448.10, stdev=472.29, samples=20 00:26:19.153 iops : min= 260, max= 276, avg=269.10, stdev= 3.75, samples=20 00:26:19.153 lat (msec) : 10=7.20%, 20=92.72%, 50=0.07% 00:26:19.153 cpu : usr=94.41%, sys=5.31%, ctx=24, majf=0, minf=111 00:26:19.153 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.154 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.154 filename0: (groupid=0, jobs=1): err= 0: pid=1196272: Wed May 15 03:20:48 2024 00:26:19.154 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(342MiB/10045msec) 00:26:19.154 slat (nsec): min=6619, max=27405, avg=11652.46, stdev=2235.15 00:26:19.154 clat (usec): min=7905, max=50433, avg=10976.09, stdev=1292.69 00:26:19.154 lat (usec): min=7912, max=50441, avg=10987.74, stdev=1292.64 00:26:19.154 clat percentiles (usec): 00:26:19.154 | 
1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:26:19.154 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:26:19.154 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:26:19.154 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14484], 99.95th=[45351], 00:26:19.154 | 99.99th=[50594] 00:26:19.154 bw ( KiB/s): min=34048, max=35840, per=33.34%, avg=35020.80, stdev=444.18, samples=20 00:26:19.154 iops : min= 266, max= 280, avg=273.60, stdev= 3.47, samples=20 00:26:19.154 lat (msec) : 10=10.96%, 20=88.97%, 50=0.04%, 100=0.04% 00:26:19.154 cpu : usr=94.34%, sys=5.36%, ctx=44, majf=0, minf=118 00:26:19.154 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.154 issued rwts: total=2738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.154 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.154 00:26:19.154 Run status group 0 (all jobs): 00:26:19.154 READ: bw=103MiB/s (108MB/s), 33.5MiB/s-35.0MiB/s (35.1MB/s-36.7MB/s), io=1031MiB (1081MB), run=10045-10048msec 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.154 00:26:19.154 real 0m11.118s 00:26:19.154 user 0m34.724s 00:26:19.154 sys 0m1.889s 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.154 03:20:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:19.154 ************************************ 00:26:19.154 END TEST fio_dif_digest 00:26:19.154 ************************************ 00:26:19.154 03:20:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:19.154 03:20:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.154 rmmod nvme_tcp 00:26:19.154 rmmod nvme_fabrics 
00:26:19.154 rmmod nvme_keyring 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1187489 ']' 00:26:19.154 03:20:48 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1187489 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1187489 ']' 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1187489 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1187489 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1187489' 00:26:19.154 killing process with pid 1187489 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1187489 00:26:19.154 [2024-05-15 03:20:48.951784] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:19.154 03:20:48 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1187489 00:26:19.154 03:20:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:19.154 03:20:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:20.529 Waiting for block devices as requested 00:26:20.529 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:20.529 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:20.529 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:20.529 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:20.789 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:20.789 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:20.789 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:20.789 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:21.048 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:21.048 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:21.048 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:21.048 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:21.307 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:21.307 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:21.307 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:21.567 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:21.567 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:21.567 03:20:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.567 03:20:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.567 03:20:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.567 03:20:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.567 03:20:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.567 03:20:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:21.567 03:20:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.100 03:20:54 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.101 
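From here the trace is pure teardown, the mirror image of setup: modprobe -r unwinds the kernel initiator stack (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going out in dependency order), the nvmf_tgt reactor is killed by pid, setup.sh reset hands the NVMe drive and ioatdma channels back from vfio-pci to their kernel drivers, and the test addresses are flushed off the e810 port. Condensed into a sketch (the pid and interface name are taken from this log; _remove_spdk_ns is not traced here, so namespace cleanup is only gestured at):

nvmfpid=1187489                     # recorded when nvmf_tgt was launched
modprobe -v -r nvme-tcp             # drags out nvme_fabrics/nvme_keyring too
kill "$nvmfpid" && wait "$nvmfpid"  # wait works here: the same shell started it
"$rootdir/scripts/setup.sh" reset   # rebind vfio-pci devices to nvme/ioatdma
# _remove_spdk_ns would delete any spdk test network namespaces at this point
ip -4 addr flush cvl_0_1            # drop the 10.0.0.x test addresses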
00:26:24.101 real 1m12.522s 00:26:24.101 user 7m8.610s 00:26:24.101 sys 0m17.499s 00:26:24.101 03:20:54 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:24.101 03:20:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:24.101 ************************************ 00:26:24.101 END TEST nvmf_dif 00:26:24.101 ************************************ 00:26:24.101 03:20:54 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:24.101 03:20:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:24.101 03:20:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:24.101 03:20:54 -- common/autotest_common.sh@10 -- # set +x 00:26:24.101 ************************************ 00:26:24.101 START TEST nvmf_abort_qd_sizes 00:26:24.101 ************************************ 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:24.101 * Looking for test storage... 00:26:24.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.101 03:20:54 nvmf_abort_qd_sizes 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
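What follows is nvmftestinit running again for the abort_qd_sizes suite, and the long run of nvmf/common.sh trace below is its NIC discovery: known Intel and Mellanox device IDs are pulled out of a pci_bus_cache map keyed "$vendor:$device" into e810/x722/mlx arrays, the e810 list wins for this tcp/phy job, and each candidate PCI address is then checked for a bound, up network interface under sysfs. The same walk can be reproduced straight from /sys without the cache; a self-contained sketch matching the E810 ID this log finds (0x8086:0x159b):

shopt -s nullglob
for dev in /sys/bus/pci/devices/*; do
  # the sysfs vendor/device files hold e.g. "0x8086" and "0x159b"
  [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
  net_devs=("$dev"/net/*)
  (( ${#net_devs[@]} )) && echo "Found net devices under ${dev##*/}: ${net_devs[*]##*/}"
done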
00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.101 03:20:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:29.377 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:29.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:29.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:29.378 Found net devices under 0000:86:00.0: cvl_0_0 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:29.378 Found net devices under 0000:86:00.1: cvl_0_1 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:29.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:26:29.378 00:26:29.378 --- 10.0.0.2 ping statistics --- 00:26:29.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.378 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:26:29.378 00:26:29.378 --- 10.0.0.1 ping statistics --- 00:26:29.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.378 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:29.378 03:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:31.913 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:31.913 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:32.848 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1204181 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1204181 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1204181 ']' 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:32.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:32.848 03:21:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:32.848 [2024-05-15 03:21:03.898979] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:26:32.848 [2024-05-15 03:21:03.899024] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.848 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.848 [2024-05-15 03:21:03.958115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.107 [2024-05-15 03:21:04.040215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.107 [2024-05-15 03:21:04.040251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.107 [2024-05-15 03:21:04.040259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.107 [2024-05-15 03:21:04.040266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.107 [2024-05-15 03:21:04.040271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.107 [2024-05-15 03:21:04.040319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.107 [2024-05-15 03:21:04.040419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.107 [2024-05-15 03:21:04.040509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.107 [2024-05-15 03:21:04.040511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:33.675 03:21:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:33.675 ************************************ 00:26:33.675 START TEST spdk_target_abort 00:26:33.675 ************************************ 00:26:33.675 03:21:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:26:33.675 03:21:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:33.675 03:21:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:26:33.675 03:21:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.675 03:21:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.964 spdk_targetn1 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.964 [2024-05-15 03:21:07.622100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.964 [2024-05-15 03:21:07.654812] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:36.964 [2024-05-15 03:21:07.655038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:36.964 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.965 03:21:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:36.965 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.251 Initializing NVMe Controllers 00:26:40.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:40.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:40.251 Initialization complete. Launching workers. 00:26:40.251 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15114, failed: 0 00:26:40.251 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1386, failed to submit 13728 00:26:40.251 success 776, unsuccess 610, failed 0 00:26:40.251 03:21:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:40.251 03:21:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:40.251 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.628 Initializing NVMe Controllers 00:26:43.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:43.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:43.628 Initialization complete. Launching workers. 00:26:43.628 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8655, failed: 0 00:26:43.628 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7410 00:26:43.628 success 316, unsuccess 929, failed 0 00:26:43.628 03:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:43.628 03:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:43.628 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.190 Initializing NVMe Controllers 00:26:46.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:46.190 Initialization complete. Launching workers. 
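A way to read the qd=4 and qd=24 summaries above (and the qd=64 one that follows) — this is an interpretation, not output of the tool: the counters balance as success + unsuccess = aborts submitted, and aborts submitted + failed to submit = I/O completed. For the qd=4 run, 776 + 610 = 1386 submitted aborts, and 1386 + 13728 = 15114, matching the completed I/O count (likewise 316 + 929 = 1245 and 1245 + 7410 = 8655 at qd=24). An "unsuccess" means the abort command itself completed but the controller reported that the target I/O could not be aborted, typically because it had already finished; that is expected, and a run only counts as broken if the failed column is nonzero.
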
00:26:46.190 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37296, failed: 0 00:26:46.190 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2804, failed to submit 34492 00:26:46.190 success 599, unsuccess 2205, failed 0 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.190 03:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1204181 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1204181 ']' 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1204181 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1204181 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1204181' 00:26:47.568 killing process with pid 1204181 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1204181 00:26:47.568 [2024-05-15 03:21:18.650259] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:47.568 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1204181 00:26:47.828 00:26:47.828 real 0m14.065s 00:26:47.828 user 0m55.976s 00:26:47.828 sys 0m2.310s 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:47.828 ************************************ 00:26:47.828 END TEST spdk_target_abort 00:26:47.828 ************************************ 00:26:47.828 03:21:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:47.828 03:21:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:47.828 03:21:18 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:26:47.828 03:21:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:47.828 ************************************ 00:26:47.828 START TEST kernel_target_abort 00:26:47.828 ************************************ 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:47.828 03:21:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:50.362 Waiting for block devices as requested 00:26:50.362 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:50.362 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:50.362 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:50.362 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:50.362 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:50.362 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:50.621 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:50.621 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:50.621 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:50.621 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:50.880 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:50.880 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:50.880 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:51.139 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:51.139 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:51.139 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:51.398 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:51.398 No valid GPT data, bailing 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.398 03:21:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:51.398 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:51.398 00:26:51.398 Discovery Log Number of Records 2, Generation counter 2 00:26:51.398 =====Discovery Log Entry 0====== 00:26:51.398 trtype: tcp 00:26:51.398 adrfam: ipv4 00:26:51.398 subtype: current discovery subsystem 00:26:51.398 treq: not specified, sq flow control disable supported 00:26:51.398 portid: 1 00:26:51.398 trsvcid: 4420 00:26:51.398 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:51.398 traddr: 10.0.0.1 00:26:51.398 eflags: none 00:26:51.398 sectype: none 00:26:51.398 =====Discovery Log Entry 1====== 00:26:51.398 trtype: tcp 00:26:51.398 adrfam: ipv4 00:26:51.398 subtype: nvme subsystem 00:26:51.398 treq: not specified, sq flow control disable supported 00:26:51.399 portid: 1 00:26:51.399 trsvcid: 4420 00:26:51.399 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:51.399 traddr: 10.0.0.1 00:26:51.399 eflags: none 00:26:51.399 sectype: none 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:51.399 03:21:22 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:51.399 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:51.657 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:51.657 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:51.657 03:21:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:51.657 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.943 Initializing NVMe Controllers 00:26:54.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:54.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:54.943 Initialization complete. Launching workers. 00:26:54.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83715, failed: 0 00:26:54.943 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 83715, failed to submit 0 00:26:54.943 success 0, unsuccess 83715, failed 0 00:26:54.943 03:21:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:54.943 03:21:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:54.943 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.226 Initializing NVMe Controllers 00:26:58.226 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.226 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:58.226 Initialization complete. Launching workers. 
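Unlike the SPDK target runs earlier, the kernel target's qd=4 summary above shows success 0, unsuccess 83715: every abort completed with a could-not-abort status, which is the expected outcome against the Linux nvmet target and still a pass for this test. For reference, a hedged recap of the configfs bring-up traced above; the xtrace shows the mkdir and echo commands but not their redirection targets, so the attribute file names below are the standard kernel nvmet ones, assumed rather than copied from this log:

    modprobe nvmet                # the tcp transport module loads on demand
    cfg=/sys/kernel/config/nvmet
    sub=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$sub"                  # configfs auto-creates namespaces/ under it
    mkdir "$sub/namespaces/1"
    mkdir "$cfg/ports/1"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$sub" "$cfg/ports/1/subsystems/"   # expose the subsystem on the port
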
00:26:58.226 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136771, failed: 0 00:26:58.226 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33966, failed to submit 102805 00:26:58.226 success 0, unsuccess 33966, failed 0 00:26:58.226 03:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:58.226 03:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.226 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.759 Initializing NVMe Controllers 00:27:00.759 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:00.759 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:00.759 Initialization complete. Launching workers. 00:27:00.759 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 129133, failed: 0 00:27:00.759 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32306, failed to submit 96827 00:27:00.759 success 0, unsuccess 32306, failed 0 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:00.759 03:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:03.341 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:03.341 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:03.601 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:03.601 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:27:03.601 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:03.601 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:04.168 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:04.427 00:27:04.427 real 0m16.515s 00:27:04.427 user 0m8.195s 00:27:04.427 sys 0m4.530s 00:27:04.427 03:21:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:04.427 03:21:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.427 ************************************ 00:27:04.427 END TEST kernel_target_abort 00:27:04.427 ************************************ 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.427 rmmod nvme_tcp 00:27:04.427 rmmod nvme_fabrics 00:27:04.427 rmmod nvme_keyring 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1204181 ']' 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1204181 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1204181 ']' 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1204181 00:27:04.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1204181) - No such process 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1204181 is not found' 00:27:04.427 Process with pid 1204181 is not found 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:04.427 03:21:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.974 Waiting for block devices as requested 00:27:06.974 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:06.974 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:06.974 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:06.974 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:06.974 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:06.974 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:06.974 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:07.284 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:07.284 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:07.284 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:07.284 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:07.284 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:07.543 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:07.543 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:07.543 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:07.801 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:07.801 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:07.801 03:21:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.338 03:21:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.338 00:27:10.338 real 0m46.142s 00:27:10.338 user 1m7.840s 00:27:10.338 sys 0m14.531s 00:27:10.338 03:21:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:10.338 03:21:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:10.338 ************************************ 00:27:10.338 END TEST nvmf_abort_qd_sizes 00:27:10.338 ************************************ 00:27:10.338 03:21:40 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:10.338 03:21:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:10.338 03:21:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:10.338 03:21:40 -- common/autotest_common.sh@10 -- # set +x 00:27:10.338 ************************************ 00:27:10.338 START TEST keyring_file 00:27:10.338 ************************************ 00:27:10.338 03:21:41 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:10.338 * Looking for test storage... 
00:27:10.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:10.338 03:21:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:10.338 03:21:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.338 03:21:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.338 03:21:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.338 03:21:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.338 03:21:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.338 03:21:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.338 03:21:41 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.338 03:21:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:10.338 03:21:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.338 03:21:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KezmSX5vz1 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:10.339 03:21:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KezmSX5vz1 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KezmSX5vz1 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.KezmSX5vz1 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.awp8WKNWlH 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:10.339 03:21:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.awp8WKNWlH 00:27:10.339 03:21:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.awp8WKNWlH 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.awp8WKNWlH 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=1213190 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1213190 00:27:10.339 03:21:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1213190 ']' 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:10.339 03:21:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:10.339 [2024-05-15 03:21:41.284751] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
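The prep_key steps above wrote each hex key into a mktemp file (chmod 0600) in the NVMe/TCP PSK interchange form produced by the traced "python -" step. A hedged reconstruction of that computation, assuming the TP 8006 interchange layout — base64 of the key followed by its CRC-32 (little-endian), behind a two-hex-digit digest id (00 = no hash); the real code is format_interchange_psk in nvmf/common.sh and may differ in detail:

    # Hypothetical stand-in for the traced "python -" invocation;
    # args are the hex key and the digest id.
    python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + binascii.crc32(k).to_bytes(4, "little")).decode()))' 00112233445566778899aabbccddeeff 0

For key0 this is the content of /tmp/tmp.KezmSX5vz1; the keyring_file_add_key RPCs below register these file paths with the bdevperf keyring over /var/tmp/bperf.sock.
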
00:27:10.339 [2024-05-15 03:21:41.284800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213190 ] 00:27:10.339 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.339 [2024-05-15 03:21:41.339887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.339 [2024-05-15 03:21:41.413144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:11.275 03:21:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:11.275 [2024-05-15 03:21:42.084553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.275 null0 00:27:11.275 [2024-05-15 03:21:42.116583] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:11.275 [2024-05-15 03:21:42.116626] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:11.275 [2024-05-15 03:21:42.116821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:11.275 [2024-05-15 03:21:42.124618] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.275 03:21:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:11.275 [2024-05-15 03:21:42.136652] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:11.275 request: 00:27:11.275 { 00:27:11.275 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.275 "secure_channel": false, 00:27:11.275 "listen_address": { 00:27:11.275 "trtype": "tcp", 00:27:11.275 "traddr": "127.0.0.1", 00:27:11.275 "trsvcid": "4420" 00:27:11.275 }, 00:27:11.275 "method": "nvmf_subsystem_add_listener", 00:27:11.275 "req_id": 1 00:27:11.275 } 00:27:11.275 Got JSON-RPC error response 00:27:11.275 response: 00:27:11.275 { 00:27:11.275 "code": -32602, 00:27:11.275 
"message": "Invalid parameters" 00:27:11.275 } 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:11.275 03:21:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=1213349 00:27:11.275 03:21:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1213349 /var/tmp/bperf.sock 00:27:11.275 03:21:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1213349 ']' 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:11.275 03:21:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:11.275 [2024-05-15 03:21:42.189964] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 00:27:11.275 [2024-05-15 03:21:42.190005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213349 ] 00:27:11.275 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.275 [2024-05-15 03:21:42.242077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.275 [2024-05-15 03:21:42.321819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.843 03:21:42 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.843 03:21:42 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:11.843 03:21:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:11.843 03:21:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:12.101 03:21:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.awp8WKNWlH 00:27:12.101 03:21:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.awp8WKNWlH 00:27:12.359 03:21:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:12.359 03:21:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:12.359 03:21:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:12.359 03:21:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.359 03:21:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:27:12.617 03:21:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.KezmSX5vz1 == \/\t\m\p\/\t\m\p\.\K\e\z\m\S\X\5\v\z\1 ]] 00:27:12.617 03:21:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:12.617 03:21:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.617 03:21:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.awp8WKNWlH == \/\t\m\p\/\t\m\p\.\a\w\p\8\W\K\N\W\l\H ]] 00:27:12.617 03:21:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:12.617 03:21:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.876 03:21:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:12.876 03:21:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:12.876 03:21:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:12.876 03:21:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:12.876 03:21:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.876 03:21:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:12.876 03:21:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:13.135 03:21:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:13.135 03:21:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.135 03:21:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.135 [2024-05-15 03:21:44.219007] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:13.135 nvme0n1 00:27:13.395 03:21:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:13.395 03:21:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:13.395 03:21:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:13.395 
03:21:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:13.395 03:21:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:13.655 03:21:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:13.655 03:21:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.655 Running I/O for 1 seconds... 00:27:14.591 00:27:14.591 Latency(us) 00:27:14.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.591 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:14.591 nvme0n1 : 1.00 15368.75 60.03 0.00 0.00 8308.88 4188.61 15614.66 00:27:14.591 =================================================================================================================== 00:27:14.591 Total : 15368.75 60.03 0.00 0.00 8308.88 4188.61 15614.66 00:27:14.591 0 00:27:14.849 03:21:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:14.849 03:21:45 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:14.849 03:21:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.108 03:21:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:15.108 03:21:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:15.108 03:21:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:15.108 03:21:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.108 03:21:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.108 03:21:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.108 03:21:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:15.367 03:21:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:15.367 03:21:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@636 
-- # local arg=bperf_cmd 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:15.367 [2024-05-15 03:21:46.480613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:15.367 [2024-05-15 03:21:46.481044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e3d50 (107): Transport endpoint is not connected 00:27:15.367 [2024-05-15 03:21:46.482037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e3d50 (9): Bad file descriptor 00:27:15.367 [2024-05-15 03:21:46.483038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:15.367 [2024-05-15 03:21:46.483048] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:15.367 [2024-05-15 03:21:46.483055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
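This is the suite's first expected failure: the target was configured with key0's PSK, so the attach above that presents key1 cannot complete the TLS handshake, which is why the reads fail with "Transport endpoint is not connected" and the controller lands in the error state; the failing JSON-RPC request and its -32602 response are dumped just below. The NOT/valid_exec_arg wrapping in the trace boils down to asserting on a non-zero exit status, roughly as in this sketch built only from commands visible in the trace:

```bash
# Hedged sketch of the negative attach: the listener holds key0's PSK,
# so presenting key1 must fail, and the test asserts on the failure.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi
```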
00:27:15.367 request: 00:27:15.367 { 00:27:15.367 "name": "nvme0", 00:27:15.367 "trtype": "tcp", 00:27:15.367 "traddr": "127.0.0.1", 00:27:15.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:15.367 "adrfam": "ipv4", 00:27:15.367 "trsvcid": "4420", 00:27:15.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.367 "psk": "key1", 00:27:15.367 "method": "bdev_nvme_attach_controller", 00:27:15.367 "req_id": 1 00:27:15.367 } 00:27:15.367 Got JSON-RPC error response 00:27:15.367 response: 00:27:15.367 { 00:27:15.367 "code": -32602, 00:27:15.367 "message": "Invalid parameters" 00:27:15.367 } 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:15.367 03:21:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:15.367 03:21:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:15.367 03:21:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.626 03:21:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:15.626 03:21:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:15.626 03:21:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.626 03:21:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:15.626 03:21:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:15.626 03:21:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.626 03:21:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.884 03:21:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:15.884 03:21:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:15.884 03:21:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:15.884 03:21:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:15.884 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:16.142 03:21:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:16.142 03:21:47 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:16.142 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.401 03:21:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:16.401 03:21:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:16.401 03:21:47 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.401 [2024-05-15 03:21:47.536119] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KezmSX5vz1': 0100660 00:27:16.401 [2024-05-15 03:21:47.536145] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:16.401 request: 00:27:16.401 { 00:27:16.401 "name": "key0", 00:27:16.401 "path": "/tmp/tmp.KezmSX5vz1", 00:27:16.401 "method": "keyring_file_add_key", 00:27:16.401 "req_id": 1 00:27:16.401 } 00:27:16.401 Got JSON-RPC error response 00:27:16.401 response: 00:27:16.401 { 00:27:16.401 "code": -1, 00:27:16.401 "message": "Operation not permitted" 00:27:16.401 } 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:16.401 03:21:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:16.401 03:21:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.401 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KezmSX5vz1 00:27:16.660 03:21:47 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.KezmSX5vz1 00:27:16.660 03:21:47 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:16.660 03:21:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:16.660 03:21:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:16.660 03:21:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:16.660 03:21:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:16.660 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.919 03:21:47 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:16.919 03:21:47 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.919 03:21:47 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.919 03:21:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.919 03:21:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.919 [2024-05-15 03:21:48.065519] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.KezmSX5vz1': No such file or directory 00:27:16.919 [2024-05-15 03:21:48.065542] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:16.919 [2024-05-15 03:21:48.065565] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:16.919 [2024-05-15 03:21:48.065571] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:16.919 [2024-05-15 03:21:48.065577] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:16.919 request: 00:27:16.919 { 00:27:16.919 "name": "nvme0", 00:27:16.919 "trtype": "tcp", 00:27:16.919 "traddr": "127.0.0.1", 00:27:16.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:16.919 "adrfam": "ipv4", 00:27:16.919 "trsvcid": "4420", 00:27:16.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.919 "psk": "key0", 00:27:16.919 "method": "bdev_nvme_attach_controller", 00:27:16.919 "req_id": 1 00:27:16.919 } 00:27:16.919 Got JSON-RPC error response 00:27:16.919 response: 00:27:16.919 { 00:27:16.919 "code": -19, 00:27:16.919 "message": "No such device" 00:27:16.919 } 00:27:17.178 03:21:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:17.178 03:21:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:17.178 03:21:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:17.178 03:21:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:17.178 03:21:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:17.178 03:21:48 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BrC4hOR3N3 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:17.178 03:21:48 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:17.178 03:21:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:17.178 03:21:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:17.178 03:21:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:17.178 03:21:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:17.178 03:21:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BrC4hOR3N3 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BrC4hOR3N3 00:27:17.178 03:21:48 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.BrC4hOR3N3 00:27:17.178 03:21:48 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BrC4hOR3N3 00:27:17.178 03:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BrC4hOR3N3 00:27:17.436 03:21:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:17.436 03:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:17.695 nvme0n1 00:27:17.695 03:21:48 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:17.695 03:21:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:17.695 03:21:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:17.695 03:21:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:17.695 03:21:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:17.695 03:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:17.954 03:21:48 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:17.954 03:21:48 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:17.954 03:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:17.954 03:21:49 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:17.954 03:21:49 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:17.954 03:21:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:17.954 03:21:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:17.954 03:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.213 03:21:49 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:18.213 03:21:49 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:18.213 03:21:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:18.213 03:21:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:18.213 03:21:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.213 03:21:49 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.213 03:21:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:18.472 03:21:49 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:18.472 03:21:49 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:18.472 03:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:18.472 03:21:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:18.472 03:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.472 03:21:49 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:18.731 03:21:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:18.731 03:21:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BrC4hOR3N3 00:27:18.731 03:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BrC4hOR3N3 00:27:18.989 03:21:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.awp8WKNWlH 00:27:18.989 03:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.awp8WKNWlH 00:27:19.247 03:21:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:19.247 03:21:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:19.247 nvme0n1 00:27:19.506 03:21:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:19.506 03:21:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:19.506 03:21:50 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:19.506 "subsystems": [ 00:27:19.506 { 00:27:19.506 "subsystem": "keyring", 00:27:19.506 "config": [ 00:27:19.506 { 00:27:19.506 "method": "keyring_file_add_key", 00:27:19.506 "params": { 00:27:19.506 "name": "key0", 00:27:19.506 "path": "/tmp/tmp.BrC4hOR3N3" 00:27:19.506 } 00:27:19.506 }, 00:27:19.506 { 00:27:19.506 "method": "keyring_file_add_key", 00:27:19.506 "params": { 00:27:19.506 "name": "key1", 00:27:19.506 "path": "/tmp/tmp.awp8WKNWlH" 00:27:19.506 } 00:27:19.506 } 00:27:19.506 ] 00:27:19.506 }, 00:27:19.506 { 00:27:19.506 "subsystem": "iobuf", 00:27:19.506 "config": [ 00:27:19.506 { 00:27:19.506 "method": "iobuf_set_options", 00:27:19.506 "params": { 00:27:19.506 "small_pool_count": 8192, 00:27:19.506 "large_pool_count": 1024, 00:27:19.506 "small_bufsize": 8192, 00:27:19.506 "large_bufsize": 135168 00:27:19.506 } 00:27:19.506 } 00:27:19.506 ] 00:27:19.506 }, 00:27:19.506 { 00:27:19.506 "subsystem": "sock", 00:27:19.506 "config": [ 00:27:19.506 { 00:27:19.506 "method": "sock_impl_set_options", 00:27:19.506 "params": { 00:27:19.506 
"impl_name": "posix", 00:27:19.506 "recv_buf_size": 2097152, 00:27:19.507 "send_buf_size": 2097152, 00:27:19.507 "enable_recv_pipe": true, 00:27:19.507 "enable_quickack": false, 00:27:19.507 "enable_placement_id": 0, 00:27:19.507 "enable_zerocopy_send_server": true, 00:27:19.507 "enable_zerocopy_send_client": false, 00:27:19.507 "zerocopy_threshold": 0, 00:27:19.507 "tls_version": 0, 00:27:19.507 "enable_ktls": false 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "sock_impl_set_options", 00:27:19.507 "params": { 00:27:19.507 "impl_name": "ssl", 00:27:19.507 "recv_buf_size": 4096, 00:27:19.507 "send_buf_size": 4096, 00:27:19.507 "enable_recv_pipe": true, 00:27:19.507 "enable_quickack": false, 00:27:19.507 "enable_placement_id": 0, 00:27:19.507 "enable_zerocopy_send_server": true, 00:27:19.507 "enable_zerocopy_send_client": false, 00:27:19.507 "zerocopy_threshold": 0, 00:27:19.507 "tls_version": 0, 00:27:19.507 "enable_ktls": false 00:27:19.507 } 00:27:19.507 } 00:27:19.507 ] 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "subsystem": "vmd", 00:27:19.507 "config": [] 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "subsystem": "accel", 00:27:19.507 "config": [ 00:27:19.507 { 00:27:19.507 "method": "accel_set_options", 00:27:19.507 "params": { 00:27:19.507 "small_cache_size": 128, 00:27:19.507 "large_cache_size": 16, 00:27:19.507 "task_count": 2048, 00:27:19.507 "sequence_count": 2048, 00:27:19.507 "buf_count": 2048 00:27:19.507 } 00:27:19.507 } 00:27:19.507 ] 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "subsystem": "bdev", 00:27:19.507 "config": [ 00:27:19.507 { 00:27:19.507 "method": "bdev_set_options", 00:27:19.507 "params": { 00:27:19.507 "bdev_io_pool_size": 65535, 00:27:19.507 "bdev_io_cache_size": 256, 00:27:19.507 "bdev_auto_examine": true, 00:27:19.507 "iobuf_small_cache_size": 128, 00:27:19.507 "iobuf_large_cache_size": 16 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_raid_set_options", 00:27:19.507 "params": { 00:27:19.507 "process_window_size_kb": 1024 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_iscsi_set_options", 00:27:19.507 "params": { 00:27:19.507 "timeout_sec": 30 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_nvme_set_options", 00:27:19.507 "params": { 00:27:19.507 "action_on_timeout": "none", 00:27:19.507 "timeout_us": 0, 00:27:19.507 "timeout_admin_us": 0, 00:27:19.507 "keep_alive_timeout_ms": 10000, 00:27:19.507 "arbitration_burst": 0, 00:27:19.507 "low_priority_weight": 0, 00:27:19.507 "medium_priority_weight": 0, 00:27:19.507 "high_priority_weight": 0, 00:27:19.507 "nvme_adminq_poll_period_us": 10000, 00:27:19.507 "nvme_ioq_poll_period_us": 0, 00:27:19.507 "io_queue_requests": 512, 00:27:19.507 "delay_cmd_submit": true, 00:27:19.507 "transport_retry_count": 4, 00:27:19.507 "bdev_retry_count": 3, 00:27:19.507 "transport_ack_timeout": 0, 00:27:19.507 "ctrlr_loss_timeout_sec": 0, 00:27:19.507 "reconnect_delay_sec": 0, 00:27:19.507 "fast_io_fail_timeout_sec": 0, 00:27:19.507 "disable_auto_failback": false, 00:27:19.507 "generate_uuids": false, 00:27:19.507 "transport_tos": 0, 00:27:19.507 "nvme_error_stat": false, 00:27:19.507 "rdma_srq_size": 0, 00:27:19.507 "io_path_stat": false, 00:27:19.507 "allow_accel_sequence": false, 00:27:19.507 "rdma_max_cq_size": 0, 00:27:19.507 "rdma_cm_event_timeout_ms": 0, 00:27:19.507 "dhchap_digests": [ 00:27:19.507 "sha256", 00:27:19.507 "sha384", 00:27:19.507 "sha512" 00:27:19.507 ], 00:27:19.507 "dhchap_dhgroups": [ 00:27:19.507 "null", 
00:27:19.507 "ffdhe2048", 00:27:19.507 "ffdhe3072", 00:27:19.507 "ffdhe4096", 00:27:19.507 "ffdhe6144", 00:27:19.507 "ffdhe8192" 00:27:19.507 ] 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_nvme_attach_controller", 00:27:19.507 "params": { 00:27:19.507 "name": "nvme0", 00:27:19.507 "trtype": "TCP", 00:27:19.507 "adrfam": "IPv4", 00:27:19.507 "traddr": "127.0.0.1", 00:27:19.507 "trsvcid": "4420", 00:27:19.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.507 "prchk_reftag": false, 00:27:19.507 "prchk_guard": false, 00:27:19.507 "ctrlr_loss_timeout_sec": 0, 00:27:19.507 "reconnect_delay_sec": 0, 00:27:19.507 "fast_io_fail_timeout_sec": 0, 00:27:19.507 "psk": "key0", 00:27:19.507 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.507 "hdgst": false, 00:27:19.507 "ddgst": false 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_nvme_set_hotplug", 00:27:19.507 "params": { 00:27:19.507 "period_us": 100000, 00:27:19.507 "enable": false 00:27:19.507 } 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "method": "bdev_wait_for_examine" 00:27:19.507 } 00:27:19.507 ] 00:27:19.507 }, 00:27:19.507 { 00:27:19.507 "subsystem": "nbd", 00:27:19.507 "config": [] 00:27:19.507 } 00:27:19.507 ] 00:27:19.507 }' 00:27:19.507 03:21:50 keyring_file -- keyring/file.sh@114 -- # killprocess 1213349 00:27:19.507 03:21:50 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1213349 ']' 00:27:19.507 03:21:50 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1213349 00:27:19.507 03:21:50 keyring_file -- common/autotest_common.sh@951 -- # uname 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1213349 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1213349' 00:27:19.767 killing process with pid 1213349 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@965 -- # kill 1213349 00:27:19.767 Received shutdown signal, test time was about 1.000000 seconds 00:27:19.767 00:27:19.767 Latency(us) 00:27:19.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.767 =================================================================================================================== 00:27:19.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@970 -- # wait 1213349 00:27:19.767 03:21:50 keyring_file -- keyring/file.sh@117 -- # bperfpid=1214864 00:27:19.767 03:21:50 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1214864 /var/tmp/bperf.sock 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1214864 ']' 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:19.767 03:21:50 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.767 03:21:50 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:27:19.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:19.767 03:21:50 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:19.767 "subsystems": [ 00:27:19.767 { 00:27:19.767 "subsystem": "keyring", 00:27:19.767 "config": [ 00:27:19.767 { 00:27:19.767 "method": "keyring_file_add_key", 00:27:19.767 "params": { 00:27:19.767 "name": "key0", 00:27:19.767 "path": "/tmp/tmp.BrC4hOR3N3" 00:27:19.767 } 00:27:19.767 }, 00:27:19.767 { 00:27:19.767 "method": "keyring_file_add_key", 00:27:19.767 "params": { 00:27:19.767 "name": "key1", 00:27:19.767 "path": "/tmp/tmp.awp8WKNWlH" 00:27:19.767 } 00:27:19.767 } 00:27:19.767 ] 00:27:19.767 }, 00:27:19.767 { 00:27:19.767 "subsystem": "iobuf", 00:27:19.767 "config": [ 00:27:19.767 { 00:27:19.767 "method": "iobuf_set_options", 00:27:19.767 "params": { 00:27:19.767 "small_pool_count": 8192, 00:27:19.767 "large_pool_count": 1024, 00:27:19.768 "small_bufsize": 8192, 00:27:19.768 "large_bufsize": 135168 00:27:19.768 } 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "subsystem": "sock", 00:27:19.768 "config": [ 00:27:19.768 { 00:27:19.768 "method": "sock_impl_set_options", 00:27:19.768 "params": { 00:27:19.768 "impl_name": "posix", 00:27:19.768 "recv_buf_size": 2097152, 00:27:19.768 "send_buf_size": 2097152, 00:27:19.768 "enable_recv_pipe": true, 00:27:19.768 "enable_quickack": false, 00:27:19.768 "enable_placement_id": 0, 00:27:19.768 "enable_zerocopy_send_server": true, 00:27:19.768 "enable_zerocopy_send_client": false, 00:27:19.768 "zerocopy_threshold": 0, 00:27:19.768 "tls_version": 0, 00:27:19.768 "enable_ktls": false 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "sock_impl_set_options", 00:27:19.768 "params": { 00:27:19.768 "impl_name": "ssl", 00:27:19.768 "recv_buf_size": 4096, 00:27:19.768 "send_buf_size": 4096, 00:27:19.768 "enable_recv_pipe": true, 00:27:19.768 "enable_quickack": false, 00:27:19.768 "enable_placement_id": 0, 00:27:19.768 "enable_zerocopy_send_server": true, 00:27:19.768 "enable_zerocopy_send_client": false, 00:27:19.768 "zerocopy_threshold": 0, 00:27:19.768 "tls_version": 0, 00:27:19.768 "enable_ktls": false 00:27:19.768 } 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "subsystem": "vmd", 00:27:19.768 "config": [] 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "subsystem": "accel", 00:27:19.768 "config": [ 00:27:19.768 { 00:27:19.768 "method": "accel_set_options", 00:27:19.768 "params": { 00:27:19.768 "small_cache_size": 128, 00:27:19.768 "large_cache_size": 16, 00:27:19.768 "task_count": 2048, 00:27:19.768 "sequence_count": 2048, 00:27:19.768 "buf_count": 2048 00:27:19.768 } 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "subsystem": "bdev", 00:27:19.768 "config": [ 00:27:19.768 { 00:27:19.768 "method": "bdev_set_options", 00:27:19.768 "params": { 00:27:19.768 "bdev_io_pool_size": 65535, 00:27:19.768 "bdev_io_cache_size": 256, 00:27:19.768 "bdev_auto_examine": true, 00:27:19.768 "iobuf_small_cache_size": 128, 00:27:19.768 "iobuf_large_cache_size": 16 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_raid_set_options", 00:27:19.768 "params": { 00:27:19.768 "process_window_size_kb": 1024 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_iscsi_set_options", 00:27:19.768 "params": { 00:27:19.768 "timeout_sec": 30 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_nvme_set_options", 
00:27:19.768 "params": { 00:27:19.768 "action_on_timeout": "none", 00:27:19.768 "timeout_us": 0, 00:27:19.768 "timeout_admin_us": 0, 00:27:19.768 "keep_alive_timeout_ms": 10000, 00:27:19.768 "arbitration_burst": 0, 00:27:19.768 "low_priority_weight": 0, 00:27:19.768 "medium_priority_weight": 0, 00:27:19.768 "high_priority_weight": 0, 00:27:19.768 "nvme_adminq_poll_period_us": 10000, 00:27:19.768 "nvme_ioq_poll_period_us": 0, 00:27:19.768 "io_queue_requests": 512, 00:27:19.768 "delay_cmd_submit": true, 00:27:19.768 "transport_retry_count": 4, 00:27:19.768 "bdev_retry_count": 3, 00:27:19.768 "transport_ack_timeout": 0, 00:27:19.768 "ctrlr_loss_timeout_sec": 0, 00:27:19.768 "reconnect_delay_sec": 0, 00:27:19.768 "fast_io_fail_timeout_sec": 0, 00:27:19.768 "disable_auto_failback": false, 00:27:19.768 "generate_uuids": false, 00:27:19.768 "transport_tos": 0, 00:27:19.768 "nvme_error_stat": false, 00:27:19.768 "rdma_srq_size": 0, 00:27:19.768 "io_path_stat": false, 00:27:19.768 "allow_accel_sequence": false, 00:27:19.768 "rdma_max_cq_size": 0, 00:27:19.768 "rdma_cm_event_timeout_ms": 0, 00:27:19.768 "dhchap_digests": [ 00:27:19.768 "sha256", 00:27:19.768 "sha384", 00:27:19.768 "sha512" 00:27:19.768 ], 00:27:19.768 "dhchap_dhgroups": [ 00:27:19.768 "null", 00:27:19.768 "ffdhe2048", 00:27:19.768 "ffdhe3072", 00:27:19.768 "ffdhe4096", 00:27:19.768 "ffdhe6144", 00:27:19.768 "ffdhe8192" 00:27:19.768 ] 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_nvme_attach_controller", 00:27:19.768 "params": { 00:27:19.768 "name": "nvme0", 00:27:19.768 "trtype": "TCP", 00:27:19.768 "adrfam": "IPv4", 00:27:19.768 "traddr": "127.0.0.1", 00:27:19.768 "trsvcid": "4420", 00:27:19.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.768 "prchk_reftag": false, 00:27:19.768 "prchk_guard": false, 00:27:19.768 "ctrlr_loss_timeout_sec": 0, 00:27:19.768 "reconnect_delay_sec": 0, 00:27:19.768 "fast_io_fail_timeout_sec": 0, 00:27:19.768 "psk": "key0", 00:27:19.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.768 "hdgst": false, 00:27:19.768 "ddgst": false 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_nvme_set_hotplug", 00:27:19.768 "params": { 00:27:19.768 "period_us": 100000, 00:27:19.768 "enable": false 00:27:19.768 } 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "method": "bdev_wait_for_examine" 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }, 00:27:19.768 { 00:27:19.768 "subsystem": "nbd", 00:27:19.768 "config": [] 00:27:19.768 } 00:27:19.768 ] 00:27:19.768 }' 00:27:19.768 03:21:50 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.768 03:21:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.028 [2024-05-15 03:21:50.961502] Starting SPDK v24.05-pre git sha1 2b14ffc34 / DPDK 23.11.0 initialization... 
00:27:20.028 [2024-05-15 03:21:50.961552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214864 ] 00:27:20.028 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.028 [2024-05-15 03:21:51.014805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.028 [2024-05-15 03:21:51.085923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.287 [2024-05-15 03:21:51.236399] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:20.854 03:21:51 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.854 03:21:51 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:27:20.854 03:21:51 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:20.854 03:21:51 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.854 03:21:51 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:20.854 03:21:51 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.854 03:21:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:21.113 03:21:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:21.113 03:21:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:21.113 03:21:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:21.113 03:21:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:21.113 03:21:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.113 03:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:21.113 03:21:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:21.372 03:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BrC4hOR3N3 /tmp/tmp.awp8WKNWlH 00:27:21.372 03:21:52 keyring_file -- keyring/file.sh@20 -- # killprocess 1214864 00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1214864 ']' 00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1214864 00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@951 -- # 
uname
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1214864
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1214864'
killing process with pid 1214864
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@965 -- # kill 1214864
00:27:21.372 Received shutdown signal, test time was about 1.000000 seconds
00:27:21.372
00:27:21.372 Latency(us)
00:27:21.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.372 ===================================================================================================================
00:27:21.372 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:27:21.372 03:21:52 keyring_file -- common/autotest_common.sh@970 -- # wait 1214864
00:27:21.631 03:21:52 keyring_file -- keyring/file.sh@21 -- # killprocess 1213190
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1213190 ']'
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1213190
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@951 -- # uname
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1213190
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1213190'
killing process with pid 1213190
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@965 -- # kill 1213190
00:27:21.631 [2024-05-15 03:21:52.755621] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:27:21.631 [2024-05-15 03:21:52.755656] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:27:21.631 03:21:52 keyring_file -- common/autotest_common.sh@970 -- # wait 1213190
00:27:22.199
00:27:22.199 real 0m12.089s
00:27:22.199 user 0m28.642s
00:27:22.199 sys 0m2.778s
00:27:22.199 03:21:53 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:22.199 03:21:53 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:27:22.199 ************************************
00:27:22.199 END TEST keyring_file
00:27:22.199 ************************************
00:27:22.199 03:21:53 -- spdk/autotest.sh@292 -- # [[ n == y ]]
00:27:22.199 03:21:53 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:27:22.199 03:21:53 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]]
00:27:22.199 03:21:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:27:22.199 03:21:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:27:22.199 03:21:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:27:22.199 03:21:53 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT
00:27:22.199 03:21:53 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup
00:27:22.199 03:21:53 -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:22.199 03:21:53 -- common/autotest_common.sh@10 -- # set +x
00:27:22.199 03:21:53 -- spdk/autotest.sh@379 -- # autotest_cleanup
00:27:22.199 03:21:53 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:27:22.199 03:21:53 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:27:22.199 03:21:53 -- common/autotest_common.sh@10 -- # set +x
00:27:26.394 INFO: APP EXITING
00:27:26.394 INFO: killing all VMs
00:27:26.394 INFO: killing vhost app
00:27:26.394 INFO: EXIT DONE
00:27:28.926 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:27:28.926 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:27:28.926 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:27:31.461 Cleaning
00:27:31.461 Removing: /var/run/dpdk/spdk0/config
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:27:31.461 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:31.461 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:31.461 Removing: /var/run/dpdk/spdk1/config
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:27:31.461 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:27:31.461 Removing: /var/run/dpdk/spdk1/hugepage_info
00:27:31.461 Removing: /var/run/dpdk/spdk1/mp_socket
00:27:31.461 Removing: /var/run/dpdk/spdk2/config
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:27:31.461 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:27:31.461 Removing: /var/run/dpdk/spdk2/hugepage_info
00:27:31.461 Removing: /var/run/dpdk/spdk3/config
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:27:31.461 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:27:31.461 Removing: /var/run/dpdk/spdk3/hugepage_info
00:27:31.461 Removing: /var/run/dpdk/spdk4/config
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:27:31.461 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:27:31.461 Removing: /var/run/dpdk/spdk4/hugepage_info
00:27:31.461 Removing: /dev/shm/bdev_svc_trace.1
00:27:31.461 Removing: /dev/shm/nvmf_trace.0
00:27:31.461 Removing: /dev/shm/spdk_tgt_trace.pid855194
00:27:31.461 Removing: /var/run/dpdk/spdk0
00:27:31.461 Removing: /var/run/dpdk/spdk1
00:27:31.461 Removing: /var/run/dpdk/spdk2
00:27:31.461 Removing: /var/run/dpdk/spdk3
00:27:31.461 Removing: /var/run/dpdk/spdk4
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1002502
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1011377
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1013155
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1014133
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1031242
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1035020
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1039509
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1041115
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1042954
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1043186
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1043421
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1043667
00:27:31.461 Removing: /var/run/dpdk/spdk_pid1044177
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1046011
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1046999
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1047501
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1049819
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1050441
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1051120
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1055310
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1065777
00:27:31.720 Removing: /var/run/dpdk/spdk_pid1069811
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1075766
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1077094
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1078642
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1082945
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1087116
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1094544
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1094546
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1099252
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1099457
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1099631
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1099954
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1100143
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1104405
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1104874
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1109099
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1112021
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1117992
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1123334
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1131890
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1139053
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1139088
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1156757
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1157402
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1158097
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1158788
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1159764
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1160369
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1161456
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1162153
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1166406
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1166639
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1172694
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1172796
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1175156
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1182713
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1182796
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1187717
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1189683
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1191651
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1192913
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1194887
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1196040
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1205200
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1205667
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1206332
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1208496
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1209058
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1209522
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1213190
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1213349
00:27:31.721 Removing: /var/run/dpdk/spdk_pid1214864
00:27:31.721 Removing: /var/run/dpdk/spdk_pid853050
00:27:31.721 Removing: /var/run/dpdk/spdk_pid854121
00:27:31.721 Removing: /var/run/dpdk/spdk_pid855194
00:27:31.721 Removing: /var/run/dpdk/spdk_pid855834
00:27:31.721 Removing: /var/run/dpdk/spdk_pid856778
00:27:31.981 Removing: /var/run/dpdk/spdk_pid857025
00:27:31.981 Removing: /var/run/dpdk/spdk_pid857996
00:27:31.981 Removing: /var/run/dpdk/spdk_pid858230
00:27:31.981 Removing: /var/run/dpdk/spdk_pid858563
00:27:31.981 Removing: /var/run/dpdk/spdk_pid860069
00:27:31.981 Removing: /var/run/dpdk/spdk_pid861115
00:27:31.981 Removing: /var/run/dpdk/spdk_pid861405
00:27:31.981 Removing: /var/run/dpdk/spdk_pid861834
00:27:31.982 Removing: /var/run/dpdk/spdk_pid862202
00:27:31.982 Removing: /var/run/dpdk/spdk_pid862489
00:27:31.982 Removing: /var/run/dpdk/spdk_pid862745
00:27:31.982 Removing: /var/run/dpdk/spdk_pid862993
00:27:31.982 Removing: /var/run/dpdk/spdk_pid863272
00:27:31.982 Removing: /var/run/dpdk/spdk_pid864052
00:27:31.982 Removing: /var/run/dpdk/spdk_pid867006
00:27:31.982 Removing: /var/run/dpdk/spdk_pid867270
00:27:31.982 Removing: /var/run/dpdk/spdk_pid867556
00:27:31.982 Removing: /var/run/dpdk/spdk_pid867753
00:27:31.982 Removing: /var/run/dpdk/spdk_pid868080
00:27:31.982 Removing: /var/run/dpdk/spdk_pid868270
00:27:31.982 Removing: /var/run/dpdk/spdk_pid868756
00:27:31.982 Removing: /var/run/dpdk/spdk_pid868821
00:27:31.982 Removing: /var/run/dpdk/spdk_pid869113
00:27:31.982 Removing: /var/run/dpdk/spdk_pid869262
00:27:31.982 Removing: /var/run/dpdk/spdk_pid869520
00:27:31.982 Removing: /var/run/dpdk/spdk_pid869674
00:27:31.982 Removing: /var/run/dpdk/spdk_pid870092
00:27:31.982 Removing: /var/run/dpdk/spdk_pid870345
00:27:31.982 Removing: /var/run/dpdk/spdk_pid870633
00:27:31.982 Removing: /var/run/dpdk/spdk_pid870899
00:27:31.982 Removing: /var/run/dpdk/spdk_pid871023
00:27:31.982 Removing: /var/run/dpdk/spdk_pid871202
00:27:31.982 Removing: /var/run/dpdk/spdk_pid871458
00:27:31.982 Removing: /var/run/dpdk/spdk_pid871703
00:27:31.982 Removing: /var/run/dpdk/spdk_pid871952
00:27:31.982 Removing: /var/run/dpdk/spdk_pid872206
00:27:31.982 Removing: /var/run/dpdk/spdk_pid872453
00:27:31.982 Removing: /var/run/dpdk/spdk_pid872708
00:27:31.982 Removing: /var/run/dpdk/spdk_pid872960
00:27:31.982 Removing: /var/run/dpdk/spdk_pid873294
00:27:31.982 Removing: /var/run/dpdk/spdk_pid873584
00:27:31.982 Removing: /var/run/dpdk/spdk_pid873832
00:27:31.982 Removing: /var/run/dpdk/spdk_pid874253
00:27:31.982 Removing: /var/run/dpdk/spdk_pid874717
00:27:31.982 Removing: /var/run/dpdk/spdk_pid874978
00:27:31.982 Removing: /var/run/dpdk/spdk_pid875241
00:27:31.982 Removing: /var/run/dpdk/spdk_pid875508
00:27:31.982 Removing: /var/run/dpdk/spdk_pid875805
00:27:31.982 Removing: /var/run/dpdk/spdk_pid876096
00:27:31.982 Removing: /var/run/dpdk/spdk_pid876388
00:27:31.982 Removing: /var/run/dpdk/spdk_pid876677
00:27:31.982 Removing: /var/run/dpdk/spdk_pid876948
00:27:31.982 Removing: /var/run/dpdk/spdk_pid877020
00:27:31.982 Removing: /var/run/dpdk/spdk_pid877335
00:27:31.982 Removing: /var/run/dpdk/spdk_pid881187
00:27:31.982 Removing: /var/run/dpdk/spdk_pid924890
00:27:31.982 Removing: /var/run/dpdk/spdk_pid929520
00:27:31.982 Removing: /var/run/dpdk/spdk_pid939574
00:27:31.982 Removing: /var/run/dpdk/spdk_pid944970
00:27:31.982 Removing: /var/run/dpdk/spdk_pid948963
00:27:31.982 Removing: /var/run/dpdk/spdk_pid949631
00:27:32.329 Removing: /var/run/dpdk/spdk_pid961004
00:27:32.329 Removing: /var/run/dpdk/spdk_pid961092
00:27:32.329 Removing: /var/run/dpdk/spdk_pid961851
00:27:32.329 Removing: /var/run/dpdk/spdk_pid962766
00:27:32.329 Removing: /var/run/dpdk/spdk_pid963683
00:27:32.329 Removing: /var/run/dpdk/spdk_pid964154
00:27:32.329 Removing: /var/run/dpdk/spdk_pid964344
00:27:32.329 Removing: /var/run/dpdk/spdk_pid964604
00:27:32.329 Removing: /var/run/dpdk/spdk_pid964619
00:27:32.329 Removing: /var/run/dpdk/spdk_pid964621
00:27:32.329 Removing: /var/run/dpdk/spdk_pid965537
00:27:32.329 Removing: /var/run/dpdk/spdk_pid966447
00:27:32.329 Removing: /var/run/dpdk/spdk_pid967370
00:27:32.329 Removing: /var/run/dpdk/spdk_pid967836
00:27:32.329 Removing: /var/run/dpdk/spdk_pid967843
00:27:32.329 Removing: /var/run/dpdk/spdk_pid968180
00:27:32.329 Removing: /var/run/dpdk/spdk_pid969506
00:27:32.329 Removing: /var/run/dpdk/spdk_pid971027
00:27:32.329 Removing: /var/run/dpdk/spdk_pid979360
00:27:32.329 Removing: /var/run/dpdk/spdk_pid979698
00:27:32.329 Removing: /var/run/dpdk/spdk_pid983874
00:27:32.329 Removing: /var/run/dpdk/spdk_pid989727
00:27:32.329 Removing: /var/run/dpdk/spdk_pid992318
00:27:32.329 Clean
00:27:32.329 03:22:03 -- common/autotest_common.sh@1447 -- # return 0
00:27:32.329 03:22:03 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:27:32.329 03:22:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:32.329 03:22:03 -- common/autotest_common.sh@10 -- # set +x
00:27:32.329 03:22:03 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:27:32.329 03:22:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:32.329 03:22:03 -- common/autotest_common.sh@10 -- # set +x
00:27:32.329 03:22:03 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:27:32.329 03:22:03 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:27:32.329 03:22:03 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:27:32.329 03:22:03 -- spdk/autotest.sh@387 -- # hash lcov
00:27:32.329 03:22:03 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:27:32.329 03:22:03 -- spdk/autotest.sh@389 -- # hostname
00:27:32.329 03:22:03 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
geninfo: WARNING: invalid characters removed from testname!
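(Editor's note, not part of the captured log: autotest.sh@389 above captures the run's coverage counters, and the @390-@396 steps that follow merge them with the pre-test baseline and strip vendored and system sources. A minimal standalone sketch of the same lcov sequence follows; the ./spdk and ./output paths, the abridged --rc flag bundle, and the hostname-derived test name are illustrative assumptions, not values taken from this job's scripts.)

    #!/usr/bin/env bash
    # Sketch of the capture -> merge -> filter coverage pass, under the
    # assumptions named above.
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)
    out=./output

    # 1. Capture the counters produced by the test run (autotest.sh@389 above).
    lcov "${rc[@]}" --no-external -q -c -d ./spdk -t "$(hostname)" -o "$out/cov_test.info"

    # 2. Merge the pre-test baseline into a combined tracefile (autotest.sh@390 below).
    lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # 3. Drop vendored and system code from the report (autotest.sh@391-@395 below).
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

Note that each -r filtering step names cov_total.info as both input and output, which is why the same tracefile is rewritten in place in the log lines below.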
00:27:50.417 03:22:20 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:52.321 03:22:23 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:54.224 03:22:25 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:56.127 03:22:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:57.502 03:22:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:59.404 03:22:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:01.306 03:22:32 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:01.306 03:22:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:01.306 03:22:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:01.306 03:22:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:01.306 03:22:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:01.306 03:22:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.306 03:22:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.306 03:22:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.306 03:22:32 -- paths/export.sh@5 -- $ export PATH
00:28:01.306 03:22:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:01.306 03:22:32 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:01.306 03:22:32 -- common/autobuild_common.sh@437 -- $ date +%s
00:28:01.306 03:22:32 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715736152.XXXXXX
00:28:01.306 03:22:32 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715736152.0AnJIK
00:28:01.306 03:22:32 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:28:01.306 03:22:32 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:28:01.306 03:22:32 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:01.306 03:22:32 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:01.306 03:22:32 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:01.306 03:22:32 -- common/autobuild_common.sh@453 -- $ get_config_params
00:28:01.306 03:22:32 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:28:01.306 03:22:32 -- common/autotest_common.sh@10 -- $ set +x
00:28:01.306 03:22:32 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:01.306 03:22:32 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:28:01.307 03:22:32 -- pm/common@17 -- $ local monitor
00:28:01.307 03:22:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:01.307 03:22:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:01.307 03:22:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:01.307 03:22:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:01.307 03:22:32 -- pm/common@21 -- $ date +%s
00:28:01.307 03:22:32 -- pm/common@25 -- $ sleep 1
00:28:01.307 03:22:32 -- pm/common@21 -- $ date +%s
00:28:01.307 03:22:32 -- pm/common@21 -- $ date +%s
00:28:01.307 03:22:32 -- pm/common@21 -- $ date +%s
00:28:01.307 03:22:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736152
00:28:01.307 03:22:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736152
00:28:01.307 03:22:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736152
00:28:01.307 03:22:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715736152
00:28:01.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736152_collect-vmstat.pm.log
00:28:01.307 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736152_collect-cpu-load.pm.log
00:28:01.312 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736152_collect-cpu-temp.pm.log
00:28:01.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715736152_collect-bmc-pm.bmc.pm.log
00:28:02.243 03:22:33 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:28:02.243 03:22:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:28:02.243 03:22:33 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:02.243 03:22:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:02.243 03:22:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:02.243 03:22:33 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:02.243 03:22:33 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:02.243 03:22:33 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:02.243 03:22:33 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:02.501 03:22:33 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:02.501 03:22:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:02.501 03:22:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:02.501 03:22:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:02.501 03:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:02.501 03:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:02.501 03:22:33 -- pm/common@44 -- $ pid=1224320
00:28:02.501 03:22:33 -- pm/common@50 -- $ kill -TERM 1224320
00:28:02.501 03:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:02.501 03:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:02.501 03:22:33 -- pm/common@44 -- $ pid=1224321
00:28:02.501 03:22:33 -- pm/common@50 -- $ kill -TERM 1224321
00:28:02.501 03:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:02.501 03:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:02.501 03:22:33 -- pm/common@44 -- $ pid=1224323
00:28:02.501 03:22:33 -- pm/common@50 -- $ kill -TERM 1224323
00:28:02.501 03:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:02.501 03:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:02.501 03:22:33 -- pm/common@44 -- $ pid=1224351
00:28:02.501 03:22:33 -- pm/common@50 -- $ sudo -E kill -TERM 1224351
00:28:02.501 + [[ -n 750303 ]]
00:28:02.501 + sudo kill 750303
00:28:02.508 [Pipeline] }
00:28:02.519 [Pipeline] // stage
00:28:02.523 [Pipeline] }
00:28:02.534 [Pipeline] // timeout
00:28:02.537 [Pipeline] }
00:28:02.547 [Pipeline] // catchError
00:28:02.551 [Pipeline] }
00:28:02.562 [Pipeline] // wrap
00:28:02.566 [Pipeline] }
00:28:02.578 [Pipeline] // catchError
00:28:02.584 [Pipeline] stage
00:28:02.586 [Pipeline] { (Epilogue)
00:28:02.597 [Pipeline] catchError
00:28:02.598 [Pipeline] {
00:28:02.608 [Pipeline] echo
00:28:02.609 Cleanup processes
00:28:02.613 [Pipeline] sh
00:28:02.891 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:02.891 1224433 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:02.891 1224718 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:02.902 [Pipeline] sh
00:28:03.180 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:03.180 ++ grep -v 'sudo pgrep'
00:28:03.180 ++ awk '{print $1}'
00:28:03.180 + sudo kill -9 1224433
00:28:03.190 [Pipeline] sh
00:28:03.467 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:13.480 [Pipeline] sh
00:28:13.762 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:13.762 Artifacts sizes are good
00:28:13.775 [Pipeline] archiveArtifacts
00:28:13.781 Archiving artifacts
00:28:13.934 [Pipeline] sh
00:28:14.219 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:14.233 [Pipeline] cleanWs
00:28:14.242 [WS-CLEANUP] Deleting project workspace...
00:28:14.242 [WS-CLEANUP] Deferred wipeout is used...
00:28:14.248 [WS-CLEANUP] done
00:28:14.250 [Pipeline] }
00:28:14.271 [Pipeline] // catchError
00:28:14.281 [Pipeline] sh
00:28:14.561 + logger -p user.info -t JENKINS-CI
00:28:14.570 [Pipeline] }
00:28:14.585 [Pipeline] // stage
00:28:14.591 [Pipeline] }
00:28:14.607 [Pipeline] // node
00:28:14.612 [Pipeline] End of Pipeline
00:28:14.646 Finished: SUCCESS
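(Editor's note, not part of the captured log: the pm/common@42-@50 teardown traced above is a conventional pidfile pattern — one .pid file per resource monitor under the output/power directory, checked for existence and then signalled. A condensed sketch under those assumptions follows; the monitor names and pidfile layout come from the log, while reading the pid from the file and removing it afterwards are assumptions, since the real helper lives in SPDK's scripts/perf/pm and its exact cleanup is not shown here.)

    # Sketch of the pidfile-based stop sequence seen in the epilogue above.
    signal_monitor_resources() {
        local signal=${1:-TERM} monitor pid pidfile
        for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
            pidfile="./output/power/$monitor.pid"
            [[ -e $pidfile ]] || continue          # monitor was never started
            pid=$(<"$pidfile")                     # assumed: pid recorded at launch
            if [[ $monitor == collect-bmc-pm ]]; then
                sudo -E kill -"$signal" "$pid"     # the BMC monitor runs privileged
            else
                kill -"$signal" "$pid"
            fi
            rm -f "$pidfile"                       # assumed cleanup
        done
    }

Sending TERM rather than KILL gives each collector a chance to flush its .pm.log before exiting, which matches the orderly shutdown the log records.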